I recently experienced a problem with SharePoint 2010 and FAST content sources which turned out to be a quick fix, once I had spent over an hour trying to figure out what was causing the issue.
FAST and SharePoint 2010 had been working perfectly on this particular development server since installation. Recently, however, the content sources had stopped working, even though nothing significant had changed on the server.
The symptoms were as follows:
- Content sources start crawling and never stop.
- Manually stopping a content source crawl hangs on "Stopping".
- No documents are ever actually crawled.
- Numerous errors in the Windows Application log: 'Failed to initialize session with document engine: Unable to resolve Contentdistributor'
Having spent some time Googling the issue and checking my server, it turned out there was a good reason for the problem and an easy fix.
During installation on development servers a self-signed certificate can be created for communication between FAST and SharePoint. It turns out that this self-signed certificate is only valid for one year, and when it expires the problems above will occur. Unfortunately nothing makes it obvious to the user that the certificate has expired, hence the potential for confusion.
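A quick way to confirm that an expired certificate is the cause is to inspect the certificate's validity period directly. A sketch in PowerShell, assuming the default certificate location (the cmdlet will prompt for the certificate password if one is set):

```powershell
# Load the FAST self-signed certificate and check its expiry date.
# Replace <FASTSearchFolder> with your actual FAST installation path.
$cert = Get-PfxCertificate "<FASTSearchFolder>\data\data_security\cert\FASTSearchCert.pfx"

# If NotAfter is in the past, the certificate has expired and needs replacing.
$cert.NotAfter
```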
The fix is to generate and deploy a new self-signed certificate, which can be achieved with the following steps:
1. Make sure the FAST Search for SharePoint and FAST Search for SharePoint Monitoring Windows services are stopped.
2. Open the FAST PowerShell shell on the FAST server as an administrator.
3. Navigate to the FAST scripts directory (i.e. <FASTSearchFolder>\installer\scripts).
4. Run the following command: .\ReplaceDefaultCertificate.ps1 -generateNewCertificate $true
5. This generates a new certificate, valid for one year, at the following location: <FASTSearchFolder>\data\data_security\cert\FASTSearchCert.pfx
6. Copy this certificate to the SharePoint server (if running a multi-server environment).
7. Start the FAST Search for SharePoint and FAST Search for SharePoint Monitoring Windows services.
8. The certificate now needs to be loaded on the SharePoint server, so open the SharePoint PowerShell as an administrator on the SharePoint 2010 server.
9. Navigate to the location of the SecureFASTSearchConnector.ps1 script (this script may need to be copied from the FAST server, as mentioned in step 6).
10. Run the following command (username should reflect the user running the SharePoint Server Search 14 (OSearch14) Windows service):
.\SecureFASTSearchConnector.ps1 -certPath "path of the certificate\certificatename.pfx" -ssaName "name of your content SSA" -username "domain\username"
Assuming there were no errors when running the PowerShell scripts, the new certificate has been deployed and will be valid for another year, and the content sources will begin working normally again.
It is possible to set a 100-year expiration on the certificate; that process is detailed on Mikael Svenson's blog:
These links helped me to get to the bottom of this issue:
I was having a frustrating problem recently with a brand new site collection in SharePoint 2010. Each time I attempted to create a new FAST Search Center site, an unexpected error would quickly occur, with a different correlation ID each time.
This new site collection was in a web application alongside other site collections where FAST Search Center sites had been successfully created.
After attempting to create the FAST Search Center site several times without success, and with very little useful information in the event logs, I turned to Google and found the following blog post with some useful advice:
The problem was that I had not enabled the 'SharePoint Server Publishing Infrastructure' site collection feature.
Once this feature was enabled I was able to create the FAST Search Center site successfully. It was a quick fix but if I had not found the above blog post it could have taken me much longer to get to the bottom of the issue.
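For reference, the feature can also be enabled from PowerShell rather than through the site collection features page. A sketch, assuming the site collection URL is illustrative and that the standard site-collection-scoped "PublishingSite" feature identity applies:

```powershell
# Enable the SharePoint Server Publishing Infrastructure feature
# on the target site collection (URL is a placeholder).
Enable-SPFeature -Identity "PublishingSite" -Url "http://server/sites/searchcentre"
```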
Accessing the Global Assembly Cache using Windows Explorer
This is something I have to look up every time I need to do it, as I always manage to forget, so I thought it might be useful to other users.
By default it is not possible to browse the physical DLLs held within the GAC (Global Assembly Cache) using Windows Explorer, as it automatically uses the built-in Assembly Cache Viewer to display a view of the installed assemblies along with their version, culture, token and architecture information. You don't get to see the actual folder structure contained within it.
If you want to see the actual folders within the assembly cache, or access the physical DLL files, there are a couple of different methods available. The easiest by far (in my opinion) is to simply run the following command at the command prompt to map a drive to the GAC (NOTE: replace [X] with the drive letter you wish to use for browsing your GAC folder):
SUBST [X]: “C:\Windows\assembly”
SUBST Z: “C:\Windows\assembly”
The SUBST command maps a drive letter to a physical path, which provides access to the contents of the assembly cache by browsing to the drive letter we selected, in this instance drive Z:\.
When accessing the newly mapped drive you will be presented with a selection of folders and will be able to browse for the specific DLL of interest. Its location will depend on the name of the assembly and whether it is an x86 or x64 DLL, but it should be easy enough to locate using this method.
You will now have full access to all of the physical DLLs held in the cache which can be useful in a number of situations.
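When you have finished browsing, the mapping can be removed with SUBST's delete switch (using drive Z: from the earlier example):

```
SUBST Z: /D
```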
A situation I have observed a number of times while working with the SharePoint object model is that workflows don’t appear to be triggered on update calls.
In the past I have worked around this issue by adding some code to loop through the registered workflows on the item and trigger them manually, which has always solved the problem.
I have since read that the workflow fails to run because it executes on a separate thread: quitting the web app or console app before the asynchronous workflow threads have finished causes them to abort, so from the end user's perspective the workflow appears not to have run. This is a known bug in SharePoint, but there are a number of workarounds, so it is not considered a show stopper.
It is possible to work around this limitation by calling SPSite.WorkflowManager.Dispose() after the item update. This waits for the workflow threads to complete before exiting.
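As a sketch in C# (the site URL, list name and item index are illustrative, not from the original), the workaround looks like this:

```csharp
using Microsoft.SharePoint;

using (SPSite site = new SPSite("http://server/sites/demo"))
using (SPWeb web = site.OpenWeb())
{
    SPListItem item = web.Lists["Documents"].Items[0];
    item["Title"] = "Updated";
    item.Update(); // queues any associated workflow on a background thread

    // Block until the asynchronous workflow threads have completed,
    // so they are not aborted when the console app exits.
    site.WorkflowManager.Dispose();
}
```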
Some useful links for further information:
Sometimes when working with the Search Core Results web part within SharePoint it may be useful to see all of the XML that is being returned to the XSL stylesheet.
It is possible to see the raw XML by replacing the XSL stylesheet with the following XSL (be sure to back-up the existing stylesheet first, if it has been modified):
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
  <xsl:template match="/">
    <xmp>
      <xsl:copy-of select="*"/>
    </xmp>
  </xsl:template>
</xsl:stylesheet>
This can be modified by editing the Search Core Results web part properties. The XSL Editor is listed under the Display Properties heading.
Search Core Results web part screen shot
These MSDN links may be useful for further information:
How to: View Search Results XML Data
How to: Change the Properties Returned in the Core Search Results
I recently experienced an issue with a server which had me scratching my head for a couple of hours. Fortunately my colleague had previously experienced this issue and promptly directed me to a Microsoft KB which allowed me to resolve the issue.
In Windows Server 2003 SP1 and above (IIS 5.1+), Microsoft built a new security feature into IIS to prevent reflection attacks. This feature looks at the FQDN or custom host header being used, and if it differs from the local machine name you may receive access denied or unauthorised errors when services call themselves locally.
With regard to SharePoint, problems can surface as indexer access issues or failures in any web service calls to the local machine.
There are a couple of fixes available (which are described in the Microsoft KB linked below) and both involve registry updates:
Method 1: Specify host names (Preferred method if NTLM authentication is desired)
Method 2: Disable the loopback check (less-recommended method)
I chose the second option for simplicity (though this may not be the best option for your situation). Before making any changes to the registry it is worth taking a moment to back it up.
To disable the loopback check, a DWORD value named DisableLoopbackCheck set to 1 must be added under the following registry key:
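The key in question, as documented in Microsoft KB 896861, can be added with a .reg fragment like this:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"DisableLoopbackCheck"=dword:00000001
```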
Restart the IIS Admin Service (a reboot may be required before these changes take effect) and the new settings should be in place. With the loopback check disabled the FQDN or custom host header no longer causes a problem.
Take a look here for more detailed information from Microsoft:
I was recently reminded of a problem I had previously experienced but subsequently forgotten with regard to MOSS content crawling.
In this particular instance I was using a custom protocol handler but I believe this issue will apply to the OOTB MOSS Search functionality as well. I was attempting to crawl a file share using a custom protocol handler and kept receiving a myriad of misleading error messages in the application event log, including:
The specified address was excluded from the index. The crawl rules may have to be modified to include this address. (0x80040d07)
as well as:
The update cannot be started because all of the content sources were excluded by crawl rules, or removed from the index configuration.
It turns out that if the credentials of the Office SharePoint Server Search Windows service are changed, the password of the default content access account used by the SharePoint search SSP becomes corrupted.
To resolve this issue I went to SharePoint Central Administration > Search SSP (e.g. SharedServices1) > Search administration > Default content access account and re-entered the crawling account password.
After doing this everything was peachy, no more errors in the event log.