
SP2013 Blog

Jan 10
Installation and Configuration of Access Services 2013 (On Premise)

When I first started on my journey to install and configure Access Services 2013 for SharePoint I had to visit quite a few different articles in order to get the process and procedure "Correct". With this in mind I have decided to pull together a guide on configuring and installing Access Services 2013 in the hope that it can save someone considerable time and effort – reducing the trawl across the internet for multiple pieces of information, based on obscure errors in ULS.

All the documents I have encountered/used are listed at the bottom of this post, and I have no doubt that you will have come across most of them already (all credit to the original authors where due).

Pre-requisites

As you are most likely aware, Access Services 2013 will require a fully functioning application model development environment, and I'm assuming that as you have SP2013 installed and the App Domain configured that you're rocking SQL 2012 as well.

When Access Services creates "Apps" it creates a DB for each, which must reside in SQL. If you are thinking of letting your development base create their own Access 2013 applications then I highly recommend that you use a different SQL instance (at the very least) to house your Access Services DBs. For the purposes of this article I will be using the same instance as SharePoint (for ease of setup), but this should never happen in production and is certainly not best practice.

You must also ensure that you have a Secure Store service application created, with a key generated. As it is now 2014 I'm going to assume that you all have this service app created; if not, a good old fashioned Bing search should provide adequate guidance on setting it up.

Installing and Configuring Access Services 2013 (on premise)

When configuring Access Services 2013 on premise I do it in 5 stages:

  1. Configuring SQL
  2. Configuring Usage Account
  3. Configuring the Application Server
  4. Creating the Service Application
  5. Testing

Stages 1 & 2 generally overlap, as the user account has to be granted access to certain elements within SQL, but I will run through the steps in order.

Stage 1 – Configuring SQL Server

First of all you will need to make sure that your instance has the following elements installed (do this via SQL installation media if required):

  • Database Engine Services
  • Full-Text and Semantic Extractions for Search
  • SQL Management Tools
  • Client Tools Connectivity

Right-click the main node for the SQL server and select Properties > Advanced, then ensure that the following are set:

  • Enable Contained Databases: True
  • Allow Triggers to Fire Others: True
  • Default Language: English

Now ensure that the server authentication mode is set correctly. Right-click the SQL server node, then Properties > Security, and select:

  • SQL Server and Windows Authentication mode

Next we need to ensure that Named Pipes is enabled. Open SQL Server Configuration Manager, expand Client Protocols, and set:

  • Named Pipes: Enabled

After this you must RESTART the SQL service for the change to take effect.

If you have a firewall enabled on your SQL box (and you should), you will need to add rules to allow communication on ports 1433 and 1434, both TCP and UDP: two inbound rules (SQL TCP and SQL UDP), enabled for both the Domain and Private profiles.
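The two rules above can also be created from PowerShell using the NetSecurity module (Windows Server 2012+). This is a sketch; the rule display names are just the ones suggested above, so adjust ports and profiles to your environment:

```powershell
# Inbound rules for SQL traffic, matching the guidance above
New-NetFirewallRule -DisplayName "SQL TCP" -Direction Inbound -Protocol TCP `
    -LocalPort 1433,1434 -Action Allow -Profile Domain,Private
New-NetFirewallRule -DisplayName "SQL UDP" -Direction Inbound -Protocol UDP `
    -LocalPort 1433,1434 -Action Allow -Profile Domain,Private
```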

Stage 2 – Configuring Usage Account

Next up, create a new service account for use with Access Services. Mine will be called SP2013Access. This account will be used to create the application pool for Access Services 2013 and will be the account that accesses the SQL instance. Due to the added memory footprint of multiple app pools I usually only have one or two application pools; however, as Access Services requires special permissions on the SQL and app server boxes, I create Access Services 2013 with its own account, so that these special permissions are available only to Access Services 2013 and not to any unnecessary services. This account cannot be a farm account!

On the SQL box that will house your Access Services 2013 DBs, set the service account permissions in SQL: dbcreator, public and securityadmin. *** Very important *** – without these you will get generic access denied errors (see the troubleshooting steps at the end of this document).

Open Security > Logins. Add your user as a login and assign it the following server roles:

  • dbcreator
  • public
  • securityadmin
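If you prefer to script this, the same login and role assignments can be applied with Invoke-Sqlcmd (from the SQLPS/SqlServer module). A sketch; the instance and account names are examples from this post:

```powershell
# Creates the login (if missing) and grants the two non-default server roles.
# "public" is assigned to every login automatically.
$login = "SUNDOWN\SP2013Access"   # example service account
Invoke-Sqlcmd -ServerInstance "SQLSERVER\INSTANCE" -Query @"
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'$login')
    CREATE LOGIN [$login] FROM WINDOWS;
ALTER SERVER ROLE [dbcreator] ADD MEMBER [$login];
ALTER SERVER ROLE [securityadmin] ADD MEMBER [$login];
"@
```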

Next (and probably most controversially) we need to grant the SP_DATA_ACCESS role on the SharePoint Config DB to our SP2013Access account. In the days of SP2010 the Add-SPShellAdmin command granted users the db_owner right. In SP2013 this is no longer the case: Add-SPShellAdmin grants the SP_DATA_ACCESS role, which means we don't have to edit the SharePoint Config DB as some other sources originally suggested.

The service account will also need access to the config cache (c:\ProgramData\Microsoft\SharePoint\Config\<GUID>) on the SharePoint servers, and to do this the account needs to be a member of the WSS_ADM_WPG group on each SharePoint server. When running the Add-SPShellAdmin cmdlet the service account is automatically added to this group on all SharePoint servers, effectively killing two birds with one stone.

From Tech net (http://technet.microsoft.com/EN-US/library/cc678863.aspx ):

SP_DATA_ACCESS database role

The SP_DATA_ACCESS role is the default role for database access and should be used for all object model level access to databases. Add the application pool account to this role during upgrade or new deployments.

The SP_DATA_ACCESS role replaces the db_owner role in SharePoint 2013.

The SP_DATA_ACCESS role will have the following permissions:

  • EXECUTE or SELECT on all SharePoint stored procedures and functions
  • SELECT on all SharePoint tables
  • EXECUTE on user-defined types where the schema is dbo
  • INSERT on the AllUserDataJunctions table
  • UPDATE on the Sites view
  • UPDATE on the UserData view
  • UPDATE on the AllUserData table
  • INSERT and DELETE on NameValuePair tables
  • CREATE TABLE permission

To apply the SP_DATA_ACCESS role and add the service account to WSS_ADM_WPG on each server, run the following PowerShell command once, from any SharePoint server (afterwards, open SQL and confirm in the security settings on the Config DB that the role has been applied):

Add-SPShellAdmin -Username Sundown\SP2013Access

Now we must make sure that the service account has access to the App Management Service App.

Go to Central Admin > Manage Service Applications > select the App Management Service App > add your SP2013Access service account > tick Full Control > OK.

Stage 3 – Configuring the Application Server

On the application server(s) that will run these services you will need to install the following components from the SQL 2012 Feature Pack, which can be found here:

http://www.microsoft.com/en-us/download/details.aspx?id=29065

  • Microsoft SQL Server 2012 Local DB (SQLLocalDB.msi)
  • Microsoft SQL Server 2012 Data-Tier Application Framework (Dacframework.msi)
  • Microsoft SQL Server 2012 Native Client (sqlncli.msi)
  • Microsoft SQL Server 2012 Transact-SQL ScriptDom (SQLDOM.MSI)
  • Microsoft System CLR Types for Microsoft SQL Server 2012 (SQLSysClrTypes.msi)

 

If newer versions of these elements already exist on your app server then stick with what you have. In most cases SQLDOM.MSI and sqlncli.msi are already installed.
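To see which of these components are already present (and their versions), you can read the Uninstall registry keys rather than hunting through Programs and Features. A sketch:

```powershell
# Lists installed SQL Server 2012 components from both 64- and 32-bit Uninstall keys
$paths = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*",
         "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*"
Get-ItemProperty $paths -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName -like "*SQL Server 2012*" } |
    Select-Object DisplayName, DisplayVersion |
    Sort-Object DisplayName
```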

 

Stage 4 – Creating the Service Application

Register your service account in Central Admin: go to Central Admin > Security and register the SP2013Access service account.

Now we are ready to create the service application.

Go to Central Admin > Manage Service Applications > New > Access Services and fill it out as follows:

  • Name: your application name
  • Application Database Server: SQLServer\Instance
  • Create a new application pool
  • Use your registered service account for Access Services 2013

**** Potential Gotcha! ****

When creating the service application, if you are using a named instance (SQLServer\InstanceName) then you will receive this error:

"A connection could be established to the Application Database Server but mixed mode authentication isn't enabled."

This happens when creating the service application through the UI, as it tries to use SQL mixed mode auth, but you are using your CA app pool accounts (Windows auth), and hence it fails.

So in essence, if you are using a named SQL instance for Access Services 2013 you will need to provision the service app using PowerShell, by running the following commands:

New-SPServiceApplicationPool -Name <ACCESS SERVICES APP POOL NAME> -Account <Domain\AccessServicesAccount>

$AppPool = Get-SPServiceApplicationPool | ?{$_.Name -eq "<ACCESS SERVICES APP POOL NAME>"}
$DBServer = "SQLSERVERNAME\INSTANCENAME"
$saName = "Access Services 2013"
New-SPAccessServicesApplication -ApplicationPool $AppPool -DatabaseServer $DBServer -Name $saName -Default
$accserv = Get-SPServiceApplication -Name $saName
New-SPAccessServicesApplicationProxy -application $accserv

 

Once the service app is created you are ready to continue.

 

Start the service – go to Services on Server and start the Access Services service on the desired application server.

Once the service is deployed we need to set Load User Profile to True on the newly created IIS app pool for Access Services, on each server that has the Microsoft SharePoint Foundation Web Application service running (usually all your front ends and Visual Studio app servers). Once complete, don't forget to recycle the app pool.

  • Load User Profile: True

Now we need to set the permissions on our web application so that we can process identities. From any SP server run the following commands (do this for each web app that will use Access Services 2013 – not required on the app domain, just content web apps):

$wa = Get-SPWebApplication http://<YourWebAppName>

$wa.GrantAccessToProcessIdentity("DOMAIN\sp2013access")
$wa.Update()

 

Once you have completed all of the above steps you are ready to start testing. Make sure you have a site collection (based on the Team Site template) available to house the initial connection to your Access Services 2013 app.

Stage 5 – Testing

All testing must be completed on a non-SharePoint machine, preferably a client. If you're solely in a development environment and short on machines then installing the Office 2013 Professional Plus client on the SQL box works – but is not recommended, of course.

Open your Access 2013 client > select Custom App > type in the name of the app and the URL of the site collection that you want to house your apps:

 

Now navigate to the newly created app and launch the Access 2013 client using the hyperlink provided (this proves two-way communication), then add tables as you desire – as a quick test, search for 'Order' and add the default table. Saving the DB automatically uploads it to your app:

 

That should conclude your setup for Access Services 2013.

Troubleshooting

The testing section seems relatively straightforward, but in reality things rarely go smoothly when implementing Access Services 2013. The majority of the issues you will face are caused either by permissions at the various levels or by issues with the App model.

Below are my top tips for troubleshooting an Access Services 2013 installation:

Check those permissions! – If you trawl through ULS when the Access 2013 app is created you will note that it creates a local copy of the app first and then publishes it to the app model. If the local copy completes but the publishing of the app fails, then nine times out of ten this is a permissions issue (usually denoted by a "can't find DB or app" error). I can't stress enough that you should re-check your permission allocation:

  • Does your Access Services account (app pool account) have a login on the SQL box?
  • Have you granted the service account SPShellAdmin?
  • Have you given it permissions to the web app (GrantAccessToProcessIdentity)?
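The SPShellAdmin check, at least, can be confirmed quickly from the SharePoint Management Shell. A sketch; the account name is the example used earlier in this post:

```powershell
# Returns an entry if the service account holds SPShellAdmin rights
# (i.e. SP_DATA_ACCESS on the Config DB plus WSS_ADM_WPG membership)
Get-SPShellAdmin | Where-Object { $_.UserName -eq "SUNDOWN\SP2013Access" }
```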

Service app won't create – A common mistake on app creation is in the Database section of the service application form: people tend to enter the name of a DB, i.e. wss_content_Access2013, when in reality this field needs the DB server name and instance name where the Access Services DBs will be stored. This may seem obvious, but you would not believe the number of people who default to conditioned behaviour and add a DB name out of habit.

Make sure the service is started – Make sure that the service is started on the server you want to run Access Services 2013 on, and that the components are installed on it. If any other servers have the service started, make sure the components are installed on those too.
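A quick way to see where the service instance is running across the farm:

```powershell
# Lists Access Services service instances with the server and current status
Get-SPServiceInstance | Where-Object { $_.TypeName -like "Access*" } |
    Select-Object TypeName, Server, Status
```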

A word on bindings – In some fringe cases I have seen IIS bindings applied for the app model; in this case the default binding in IIS must be a wildcard, otherwise your app will create just fine but will give a 404 when rendered.

Access Services 2013 – a common correlation error

This error crops up more than most and can be very misleading when it comes to troubleshooting. If you get to this point, I implore you to go back through your permissions tree and make absolutely certain that you have applied the permissions at all the relevant levels.

ULS Log Entries – Very misleading!

 

0x19EC    SharePoint Foundation    Topology    e5mb    Medium    WcfReceiveRequest: LocalAddress: 'http://<SERVERNAME>app1.sundown.local:32843/8fa2832f90bf4b81b469938b44333246/AppMng.svc' Channel: 'System.ServiceModel.Channels.ServiceChannel' Action: 'http://schemas.microsoft.com/sharepoint/soap/IAppManagementServiceApplication/GetAppManagementDatabaseMap' MessageId: 'urn:uuid:7e97d3fb-6976-4dc3-a82c-90f44a4ab814'    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x19EC    SharePoint Foundation    Database    afjqz    Medium    The following range is retrieved for database AppServiceDB while constructing the database map: Range Start: 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00, Range End: NULL, Lower Sub-Range Mode: ReadWrite, Lower Sub-Range Point NULL, Upper Sub-Range Mode: ReadWrite, Upper Sub-Range Point: NULL.    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x19EC    SharePoint Foundation    Monitoring    b4ly    Medium    Leaving Monitored Scope (ExecuteWcfServerOperation). Execution Time=0.7861    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x18D4    SharePoint Foundation    General    adyrv    High    Cannot find site lookup info for request Uri http://<SERVERNAME>app1:32843/ceddfc4a1b864a3480714df7bae26ddc/AccessService.svc.    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x18D4    SharePoint Foundation    General    adyrv    High    Cannot find site lookup info for request Uri http://<SERVERNAME>app1:32843/ceddfc4a1b864a3480714df7bae26ddc/AccessService.svc.    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x18D4    SharePoint Foundation    Topology    e5mb    Medium    WcfReceiveRequest: LocalAddress: 'http://<SERVERNAME>app1.sundown.local:32843/ceddfc4a1b864a3480714df7bae26ddc/AccessService.svc' Channel: 'System.ServiceModel.Channels.ServiceChannel' Action: 'http://schemas.microsoft.com/office/Access/Server/WebServices/AccessServerInternalService/AccessServiceSoap/GetHealthScore' MessageId: 'urn:uuid:f7838f4b-d03f-4f38-ac2b-70ac1c165691'    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x18D4    SharePoint Server    Logging Correlation Data    xmnv    Medium    ECS RequestId=516    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x18D4    Access Services 2010    Data Layer    00000    Medium    ExcelService.LogRequest: starting request of type GetSynchronousHealthScore. Caller ip=192.168.0.121    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x18D4    SharePoint Server    Logging Correlation Data    xmnv    Medium    User=0#.w|sundown\sp2013service    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x18D4    Access Services 2010    Administration    9m0a    High    [Forced due to logging gap, cached @ 01/10/2014 08:04:13.67, Original Level: Verbose] Tried to obtain setting {0} from Conversion Service Application, but it didn't exist.    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x18D4    Access Services 2010    Data Layer    00000    High    [Forced due to logging gap, Original Level: Verbose] OperationQueue.ExecuteContainedOperationComplete: Got completion event after execution of {0}    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x18D4    SharePoint Server    Logging Correlation Data    xmnv    Medium    Result=Success    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x18D4    Access Services 2010    Data Layer    00000    Medium    ExcelServiceBase.EndProcessOperation: Called. UserOperation was finished synchronously: True    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x18D4    Access Services 2010    Data Layer    00000    Medium    UserOperation.Dispose: Disposing Microsoft.Office.Access.Server.DataServer.Operations.EmptyOperation, WebMethod: GetSynchronousHealthScore.    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x18D4    SharePoint Foundation    Monitoring    b4ly    Medium    Leaving Monitored Scope (ExecuteWcfServerOperation). Execution Time=97.4512    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x2208    SharePoint Foundation    Topology    e5mb    Medium    WcfReceiveRequest: LocalAddress: 'http://<SERVERNAME>app1.sundown.local:32843/07bcb35dd8d0442199684785004794c2/AccessService.svc' Channel: 'System.ServiceModel.Channels.ServiceChannel' Action: 'http://schemas.microsoft.com/office/Access/2010/11/Server/WebServices/AccessServerInternalService/IAccessServiceSoap/GetHealthScore' MessageId: 'urn:uuid:1dabf4f4-9504-4aca-8871-8ac7db87aa90'    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

0x2208    SharePoint Foundation    Monitoring    b4ly    Medium    Leaving Monitored Scope (ExecuteWcfServerOperation). Execution Time=0.1766    a1d5689c-6b3e-00c2-8cc9-9a1c20db1998

 

Get the full error from Event Viewer (Microsoft Office Alerts):

 

Fix:

This is usually caused by the service account not having the required permission at the SQL instance level – in this scenario we forgot to add the dbcreator role – so it's worth checking!

 

Reference

None of this post would have been possible without the reference material provided by these previous posts on the subject. I would like to thank the original authors for their time and effort – great work, people!

http://blogs.msdn.com/b/kaevans/archive/2012/08/30/configuring-access-services-2013-on-premises.aspx

http://blogs.msdn.com/b/kaevans/archive/2013/07/14/access-services-2013-setup-for-an-on-premises-installation.aspx

http://technet.microsoft.com/en-us/library/jj714714.aspx

http://mmman.itgroove.net/2013/05/sharepoint-2013-access-services - Hints and Tips

http://www.microsoft.com/en-us/download/details.aspx?id=30445 - Whitepaper for install

http://www.microsoft.com/en-us/download/details.aspx?id=29065 - SQL 2012 Feature Pack

http://technet.microsoft.com/en-us/library/ee906548(v=office.15) - Access Service PowerShell Commands

http://technet.microsoft.com/EN-US/library/cc678863.aspx - Account perms in 2013

Sep 30
Workflow Manager 403 forbidden

After the configuration of the workflow manager you may try and hit the service descriptor and receive a 403 forbidden error.

This is caused by the AdminGroup being left as the default BUILTIN\Administrators. To test this: if your account is a local admin (or you run IE with elevated privileges) you should be able to render the page, but any other account fails.

This is because the request is trying to be elevated using local admin privileges from a non-privileged account and is being blocked.

You can view the AdminGroup using the following commands:

$Farm = Get-WFFarm
$Farm.AdminGroup

Looking at the definition of the property, it appears to have both a get and a set – intimating that it can be changed – but unfortunately the farm object does not have an update method, so any changes made will not stick. The only way I've found to rectify this so far is to:

 

  • Rip down the farm (remove host from farm and delete the 6 dbs, 3 SB and 3 WF)
  • Make sure the workflow service account is a local administrator
  • Create a Workflow Farm Managers group
  • Install Workflow Manager again, this time making sure the Admin Group is set to the Workflow Managers group

     

One other point of caution – after adding the permissions I often find it pertinent to reboot to ensure they are picked up correctly.

If anyone out there has found a way to alter the AdminGroup via PowerShell, then please let me know – I'll be all ears.

Aug 07
SharePoint 2013 Starting Services on Multiple Servers using PowerShell

These script blocks are intended to start services on multiple servers as a time-saving exercise. The three below are examples of endpoints that can be enabled across the whole farm, but feel free to chop and change as necessary.

 

#####################################################################################
# Initial Config
# Used to illustrate which servers the services should be started on. Uses a match on
# the first portion of the server names to pull back all FEs or APPs as appropriate.
# For instance, if we have ten servers - 3 WFEs called LIVEWIREWFE1,2,3 etc. and 7 APP
# servers called LIVEWIREAPP1,2,3 etc. - and we wanted to start the services only on
# the WFEs, we would set $CompName to LIVEWIREWFE; for only the app servers,
# LIVEWIREAPP; and for all servers, LIVEWIRE.
#####################################################################################

$CompName = "CONSP1"

####################################################################
######### Managed Meta Data Service - Starts on all Servers ########
####################################################################

Write-Host "Managed MetaData Service Endpoints BEFORE provisioning:" -ForegroundColor Red -BackgroundColor Yellow
Get-SPServiceInstance | ?{$_.TypeName -eq "Managed MetaData Web Service"}
$Servers = Get-SPServer | ?{$_.Address -Match $CompName}
foreach ($Server in $Servers)
{
    Write-Host "Starting MMS Service Instance on" $Server -ForegroundColor Red -BackgroundColor Yellow
    Get-SPServiceInstance -Server $Server | ?{$_.TypeName -eq "Managed MetaData Web Service"} | Start-SPServiceInstance
}
Start-Sleep -s 5 # Sleeps for 5 seconds to allow provisioning to complete
Write-Host "Managed MetaData Service Endpoints AFTER provisioning:" -ForegroundColor Red -BackgroundColor Yellow
Get-SPServiceInstance | ?{$_.TypeName -eq "Managed MetaData Web Service"}

####################################################################
######### Secure Store Service - Starts on all Servers #############
####################################################################

Write-Host "Secure Store Service Endpoints BEFORE provisioning:" -ForegroundColor Red -BackgroundColor Yellow
Get-SPServiceInstance | ?{$_.TypeName -eq "Secure Store Service"}
$Servers = Get-SPServer | ?{$_.Address -Match $CompName}
foreach ($Server in $Servers)
{
    Write-Host "Starting Secure Store Service Instance on" $Server -ForegroundColor Red -BackgroundColor Yellow
    Get-SPServiceInstance -Server $Server | ?{$_.TypeName -eq "Secure Store Service"} | Start-SPServiceInstance
}
Start-Sleep -s 5 # Sleeps for 5 seconds to allow provisioning to complete
Write-Host "Secure Store Service Endpoints AFTER provisioning:" -ForegroundColor Red -BackgroundColor Yellow
Get-SPServiceInstance | ?{$_.TypeName -eq "Secure Store Service"}

####################################################################
######### BDC - Starts on all Servers ##############################
####################################################################

Write-Host "BDC Endpoints BEFORE provisioning:" -ForegroundColor Red -BackgroundColor Yellow
Get-SPServiceInstance | ?{$_.TypeName -eq "Business Data Connectivity Service"}
$Servers = Get-SPServer | ?{$_.Address -Match $CompName}
foreach ($Server in $Servers)
{
    Write-Host "Starting BDC Instance on" $Server -ForegroundColor Red -BackgroundColor Yellow
    Get-SPServiceInstance -Server $Server | ?{$_.TypeName -eq "Business Data Connectivity Service"} | Start-SPServiceInstance
}
Start-Sleep -s 5 # Sleeps for 5 seconds to allow provisioning to complete
Write-Host "BDC Endpoints AFTER provisioning:" -ForegroundColor Red -BackgroundColor Yellow
Get-SPServiceInstance | ?{$_.TypeName -eq "Business Data Connectivity Service"}

If you get to the point where you want to stop instances of a service across all servers, then feel free to use this:

 

$CompName = "<SERVER Name>"
$Servers = Get-SPServer | ?{$_.Address -Match $CompName}
foreach ($Server in $Servers)
{
    Get-SPServiceInstance -Server $Server | ?{$_.TypeName -eq "<INSERT SERVICE INSTANCE NAME>"} | Stop-SPServiceInstance
}
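Since the three start blocks above differ only in the service type name, they can be collapsed into one reusable helper. A sketch (the function name is my own, not a built-in cmdlet):

```powershell
# Starts or stops a service instance on every server whose address matches $CompName
function Set-ServiceInstanceState {
    param(
        [string]$CompName,
        [string]$TypeName,
        [ValidateSet("Start","Stop")][string]$Action = "Start"
    )
    $Servers = Get-SPServer | ?{ $_.Address -Match $CompName }
    foreach ($Server in $Servers) {
        $instance = Get-SPServiceInstance -Server $Server | ?{ $_.TypeName -eq $TypeName }
        if ($Action -eq "Start") { $instance | Start-SPServiceInstance }
        else                     { $instance | Stop-SPServiceInstance }
    }
}

# Example: start Managed Metadata on all servers matching CONSP1
Set-ServiceInstanceState -CompName "CONSP1" -TypeName "Managed Metadata Web Service"
```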

 

So there you have it – hopefully this will save a little time on your builds.

Aug 07
SharePoint 2013 Core Service Application Provisioning

When creating a new farm I often find the creation of the core service applications a very repetitive task. The configuration wizard is never really an option, as a sea of DB GUIDs littering SQL is not desirable. So for some time now (ever since R12, really) I've been using pre-packaged scripts to provision all the core service apps into their own DBs. Once I've finished deploying the bits to all the servers, I update $DB with the alias info and the app pool info and just run the script. It's designed to create them all in order of dependency; in larger farms the service apps can take a few seconds to register, so to make sure things go smoothly the script pauses for 5 seconds after each service app is provisioned. Feel free to reduce this as necessary. All the service apps will use the same SharePoint Web Services application pool.

It must be noted that these core service apps are being provisioned into the default proxy group for later use in a services farm. In later posts I'll be providing the multi-tenancy variant.

Here's the script:

 

########## CORE SERVICE APPLICATIONS ##########
########## Initial details:
########## - Set the instance name/alias for SQL
########## - Gets the SharePoint Web Services app pool - uses -Match as this app pool can end in either Root or System

Asnp * # adds the PowerShell snap-in - just in case you're not running this from the Management Shell
$DB = "CONPRI"
$saAppPool = Get-SPServiceApplicationPool | ?{$_.Name -Match "SharePoint Web Services"}

######### SESSION STATE SERVICE #########
$stateDB = New-SPStateServiceDatabase -Name Session_State_Service
$state = New-SPStateServiceApplication -Name "Session State Service" -Database $stateDB
New-SPStateServiceApplicationProxy -Name "Session State Service Proxy" -ServiceApplication $state -DefaultProxyGroup
Start-Sleep -s 5

######### MMS #########
New-SPMetadataServiceApplication -ApplicationPool $saAppPool -Name "Managed Metadata Service" -DatabaseName MMS -DatabaseServer $DB
$mmsent = Get-SPServiceApplication -Name "Managed Metadata Service"
New-SPMetadataServiceApplicationProxy -Name "Managed Metadata Service Proxy" -ServiceApplication $mmsent
Start-Sleep -s 5

######### SECURE STORE #########
New-SPSecureStoreServiceApplication -ApplicationPool $saAppPool -Name "Secure Store Service" -AuditingEnabled -DatabaseName "Secure_Store_Service" -DatabaseServer $DB
$secstore = Get-SPServiceApplication -Name "Secure Store Service"
New-SPSecureStoreServiceApplicationProxy -Name "Secure Store Service Proxy" -ServiceApplication $secstore
Start-Sleep -s 5

######### WORK MANAGEMENT #########
New-SPWorkManagementServiceApplication -ApplicationPool $saAppPool -Name "Work Management"
$workman = Get-SPServiceApplication -Name "Work Management"
New-SPWorkManagementServiceApplicationProxy -Name "Work Management Proxy" -ServiceApplication $workman
Start-Sleep -s 5

######### WORD AUTOMATION #########
New-SPWordConversionServiceApplication -Name "Word Automation" -DatabaseName "Word_Automation" -DatabaseServer $DB -ApplicationPool $saAppPool
$wordapp = Get-SPServiceApplication -Name "Word Automation"
# No proxy setup needed
Start-Sleep -s 5

######### BDC #########
New-SPBusinessDataCatalogServiceApplication -ApplicationPool $saAppPool -Name "Business Data Catalog" -DatabaseServer $DB -DatabaseName "Business_Data_Catalog"
$bdc = Get-SPServiceApplication -Name "Business Data Catalog"
# New-SPBusinessDataCatalogServiceApplicationProxy -Name "Business Data Catalog Proxy" -ServiceApplication $bdc ## proxy not needed, but check on creation
Start-Sleep -s 5

######### MACHINE TRANS #########
New-SPTranslationServiceApplication -ApplicationPool $saAppPool -Name "Machine Translation Service" -DatabaseServer $DB -DatabaseName "Machine_Translation_Service"
$machtrans = Get-SPServiceApplication -Name "Machine Translation Service"
# New-SPTranslationServiceApplicationProxy -Name "Machine Translation Service Proxy" -ServiceApplication $machtrans ## proxy not needed, but check on creation
Start-Sleep -s 5

######### SUBSCRIPTION SETTINGS #########
New-SPSubscriptionSettingsServiceApplication -ApplicationPool $saAppPool -DatabaseName Subscription_Settings -DatabaseServer $DB -Name "Subscription Settings"
$subset = Get-SPServiceApplication -Name "Subscription Settings"
New-SPSubscriptionSettingsServiceApplicationProxy -ServiceApplication $subset
Start-Sleep -s 5

######### APP MANAGEMENT #########
New-SPAppManagementServiceApplication -ApplicationPool $saAppPool -DatabaseName APP_Management -DatabaseServer $DB -Name "App Management"
$appman = Get-SPServiceApplication -Name "App Management"
New-SPAppManagementServiceApplicationProxy -Name "App Management Proxy" -ServiceApplication $appman
Start-Sleep -s 5

######### Lists out all the service apps that were provisioned #########
Get-SPServiceApplication

Aug 05
Configuring SharePoint 2013 failover partner settings using PowerShell

 

This post is short and sweet and should hopefully cover everything you need to know about retrieving and setting the failover partner settings for your databases.

 

To get the current settings for any single database you can use the following command:

Get-SPDatabase | select name, server, failoverserver | ?{$_.name -match "Profile_DB"}

To get the failover partner settings for all DBs you can use the following:

Get-SPDatabase | select id, name, server, failoverserver | Out-GridView

 

To set the failover partner settings for a particular DB, use the following block – replacing with your settings as appropriate:

$db = Get-SPDatabase <DB_GUID_HERE> # your GUID from the Get-SPDatabase command above
$db # verify you have the right DB
$db.AddFailoverServiceInstance("YOUR_SQL_FAILOVER_") # insert your failover SQL server
$db.Update() # updates the settings
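If every database shares the same failover partner, the per-DB steps above can be looped. A sketch; the failover server name is a placeholder:

```powershell
# Sets the same failover partner on every SharePoint database
$failover = "SQLFAILOVER"   # placeholder - your failover SQL server
Get-SPDatabase | ForEach-Object {
    $_.AddFailoverServiceInstance($failover)
    $_.Update()
}
```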

 

To confirm the settings have taken you can re-run the list all command:

 

Get-SPDatabase | select id, name, server, failoverserver|out-Gridview

 

Hopefully someone will find this useful.

Aug 03
Configuring SP2010 SSRS using SQL2012 for a multi-server environment

 

Common errors this post helps rectify:

Error: Report Server WMI Provider error: Invalid namespace

Error: "Installation Error: Could not find SOFTWARE\Microsoft\Microsoft SQL Server\110 registry key"

 

 

If you are deploying SSRS using SP2010 and SQL2012 (multi-server environment) it is key to note that the new SP2013 mechanism of configuring SSRS as a service application applies. Just to call this out in its truest sense: you no longer use the SSRS Configuration Manager or the Central Admin options for reporting services. Instead you create a Service Application.

So if you are seeing the above errors, you are trying to configure SSRS the SP2010-with-SQL2008 way, when you need to configure it using the SP2010-with-SQL2012 method – make sense?

All the guidance on this subject assumes a single-server initial deployment using SQL2012, and there is very little out there for a multi-server environment.

When you do a multi-server deployment, the SQL box is installed and configured first, which means the SSRS component hasn't had a chance to register with the SharePoint machines. The upshot of this is that the following commands don't work:

Install-SPRSService

Install-SPRSServiceProxy

If you look at the instructions for the Reporting Services add-in, it clearly states that the add-in should be run on a server running a SharePoint product. Although our SQL box is joined to the farm, it shouldn't have SharePoint installed on it – which means we have to add BOTH the add-in and Reporting Services for SharePoint on an application or web front end server. If you just install the add-in on an app or WFE server, then when starting the service application you will receive this error:

"Installation Error: Could not find SOFTWARE\Microsoft\Microsoft SQL Server\110 registry key"

 

So to be clear – the high level steps for setting up SSRS in SP2010 using SQL2012 in a multi-server environment are:

  • Install SQL as you would normally – without 'Reporting Services – SharePoint' or the add-in
  • Install and configure your SharePoint farm as normal
  • Once your SharePoint farm is up and running and you are ready to progress, log on to your application/WFE servers and use the SQL 2012 installation media to add 'Reporting Services – SharePoint' and the add-in to the application/WFE servers of your choice

The SSRS PowerShell commandlets will now be registered.
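With the cmdlets registered, the install commands from earlier will now succeed. A minimal sketch of the next steps (the TypeName wildcard match is an assumption – check your own Get-SPServiceInstance output; the linked guide below covers creating the service application itself):

```powershell
# Register the Reporting Services service and proxy with the farm
Install-SPRSService
Install-SPRSServiceProxy

# Start the Reporting Services service instance on this server
Get-SPServiceInstance | Where-Object {$_.TypeName -like "*Reporting Services*"} | Start-SPServiceInstance
```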

Progress with the SSRS installation guide below from the section "Install and Start the Reporting Services SharePoint Service":

http://msdn.microsoft.com/en-us/library/gg492276.aspx

Another note of caution here is that for SSRS to work the SharePoint reporting services add in has to be installed on EVERY web front end server.

 

Useful Resources:

Install Reporting Services

http://msdn.microsoft.com/en-us/library/gg492276.aspx#bkmk_create_serrviceapplication

Troubleshoot Reporting Services

http://msdn.microsoft.com/en-us/library/ms144289.aspx

Aug 02
Content Type Syndication Hub – Technical Overview

Originally posted on 13 March 2012

In its most basic sense the Content Type Syndication Hub (CTSH) provides a mechanism for centralising control of content types across multiple site collections and web applications.

It does this by "Promoting" one designated site collection to become the Content Type Syndication Hub (CTSH). Any subsequent content types added to this "Promoted" site collection will be published out to all other "Consuming" site collections.

The CTSH is part of the Managed Metadata Service application and as such any site collection or web application that uses this service application will also consume content types from the CTSH.

You can only have one CTSH per Managed Meta Data Service Application and once set it cannot be changed. If you want to move the CTSH after inception then you must create/use a brand new Metadata Service Application.

However a web application may consume content types from more than one CTSH by subscribing to multiple metadata services. In the case of any conflicts the Default Managed Metadata Service Web app will win (that is to say the content type on the default will be the version pushed to the consuming site collection).

Content Type Syndication Hub – Known Limitations

Non supported column types – Custom Fields and external data columns are not supported in syndicated content types. They can be added to the original content type on the hub, but the column will not be pushed out.

Workflows – Workflows are NOT supported with syndicated content types. Once deployed, workflows can be attached to them in the normal fashion, but after every republish the association is dropped and has to be re-established.

To configure the Content Type Syndication Hub

The initial configuration of the CTSH is relatively straightforward and should be the first element checked when troubleshooting.

Step 1 - Provision the CTSH in the managed metadata service

  • In central admin, go to Managed Service Applications
  • Focus (not select) the managed metadata Service
  • Enter the URL that is going to be the CTSH
  • Focus (not select) the Managed Metadata service Proxy
  • Choose the desired options
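The hub URL can also be set from PowerShell rather than Central Admin. A sketch, assuming your service application carries the default name "Managed Metadata Service" and with a placeholder hub URL:

```powershell
# Find the Managed Metadata service application by name (adjust to your farm)
$mms = Get-SPMetadataServiceApplication | Where-Object {$_.Name -eq "Managed Metadata Service"}

# Point it at the site collection that will act as the hub
Set-SPMetadataServiceApplication -Identity $mms -HubUri "http://YOUR_HUB_SITE_COLLECTION"
```

Remember that once set, the hub URI cannot be changed – as noted above, moving the hub means creating a new service application.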

Step 2 -Enable the Service on the site

  • Go to the proposed CTSH site
  • Enable the CTSH Feature at the site collection level
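If you prefer PowerShell to the site settings page, the same feature can be enabled with Enable-SPFeature. A sketch, assuming the feature's internal name is "ContentTypeHub" and with a placeholder URL:

```powershell
# Enable the Content Type Syndication Hub feature on the hub site collection
Enable-SPFeature -Identity "ContentTypeHub" -Url "http://YOUR_HUB_SITE_COLLECTION"
```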

Step 3 - Run the timer Jobs

  • Run the Content Hub Timer job (More on this later)
  • Run the Content Subscriber Timer Job (More on this later)

Content Type Hub Job – runs every 15 minutes by default – gathers up all content types that have been set to publish for the first time or to republish.

Content Type Subscriber Job – runs at 59 minutes past the hour by default (one of these jobs exists per web application) – performs an update check per content type as well as pulling in ("consuming") any new content types.
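Both jobs can be triggered on demand from PowerShell rather than waiting for the schedule. A sketch, assuming the job titles match "Content Type Hub" and "Content Type Subscriber" (confirm against Get-SPTimerJob output on your own farm, and swap in your web application URL):

```powershell
# Run the hub job (gathers published/republished content types)
Get-SPTimerJob | Where-Object {$_.Title -like "Content Type Hub*"} | Start-SPTimerJob

# Run the subscriber job for a single web application only
Get-SPTimerJob | Where-Object {$_.Title -like "Content Type Subscriber*" -and
    $_.WebApplication.Url -like "http://YOUR_WEB_APP*"} | Start-SPTimerJob
```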

Sealed and unsealed content types

When a content type is "Consumed" by a site collection, that content type becomes "Sealed" – which in essence means read only. This prevents end users from altering the content type and making the hub unstable.

If the content type is retracted (unpublished) then the site collection will not receive any more updates for that content type, but the content type will be set to the site scope and will become "Unsealed". End users will then be able to edit this content type as it is site specific.

This is to ensure that the removal of content types from a hub does not lose any associated data.

Removing a Content Type from a Site completely

As mentioned above the use of unsealed content types has its advantages, but if you are looking to remove the content type completely from the site then you will need to go through a more hands on process.

First of all you must remove all instances of the content type from any lists or document libraries in the site/site collection. Once this has been done you can delete the content type from the site collection gallery.

As you can imagine, with multiple content types, libraries and sites this process can take some time. The PowerShell code below can be run against your site collection to help find all documents (and their libraries) where a specific content type is used – which should aid your search considerably.

   

#Add-PSSnapin Microsoft.SharePoint.PowerShell

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") > $null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Publishing") > $null

$url = "http://SITECOLLECTIONURLGOESHERE"
$site = New-Object Microsoft.SharePoint.SPSite($url)
$contenttype = $site.RootWeb.ContentTypes["Document"]   # Change this to represent the desired CT

ForEach ($web in $site.AllWebs)
{
    ForEach ($list in $web.Lists)
    {
        if ($list.BaseTemplate -eq "DocumentLibrary")
        {
            ForEach ($item in $list.Items)
            {
                If ($item.ContentType.Name -eq $contenttype.Name)
                {
                    "DocURL:" + " /" + $item.Url + " CT: " + $item.ContentType.Name
                }
            }
        }
    }
    $web.Dispose()
}
$site.Dispose()

   

Common Errors

Newly published Content Types not appearing in site collections

This is by far the most common of all errors with the CTSH and it can be caused by a number of reasons. Troubleshooting steps for this error should be to review all configuration settings and then test syndication to the affected site using the subscriber job for that web application only.

In the first instance, check that the issue reproduces in PPE and, if possible, rectify it there first and confirm any repro steps.

After this, check the Content Type Syndication Hub error log.

Next, check the content type publishing log for specific publishing errors.

These steps should outline the exact nature of the fault and allow you to troubleshoot further.

If the logs are showing clear then use the following steps to confirm syndication is operating correctly:

  • Check CTSH settings in Central Admin (as outlined in the screenshots in Section 1 above)
  • Ensure you can surface the CTSH site collection
  • Make sure that content types in the CTSH are set to publish
  • Check currently running Jobs in Central Admin - this will show if any long running subscriber jobs are present

If you are still having issues then the creation of a test content type may be necessary but make sure you gain customer approval before deploying it.

  • Create a CTSH Content Type
  • Set the Content Type to Publish
  • Run the Content Subscriber job for that web application – alternatively wait till 59 minutes past the hour when the job will run as scheduled.

If the test content type syndicates correctly you have confirmed that the CTSH is operating as expected and the fault lies with the Content Type itself.

Columns missing from Syndicated Content Types

The CTSH does not support Custom Fields and external data columns but it WILL publish the rest of the content type. This can lead to the customer reporting columns are missing. In this instance there is nothing that can be done until the offending columns are removed from the content type at the CTSH level.

Content not syndicating to blank site

You may get a customer call about content types not being syndicated, and on checking you cannot find the Content Type Publishing hyperlink. This is more than likely because the site was created using the blank site template.

By default the blank site does not have the taxonomy feature enabled, and it cannot be enabled through the interface. To enable the feature use either of the following methods:

   

Via PowerShell

Enable-SPFeature -id 73EF14B1-13A9-416b-A9B5-ECECA2B0604C -URL <SiteCollectionURL>

   

Via STSADM

stsadm -o activatefeature -id 73EF14B1-13A9-416b-A9B5-ECECA2B0604C -url http://<server> -force

Aug 02
UPA Persistent Stuck on Starting Issue

Originally posted on 22 January 2012

The UPA Stuck on Starting journey – a trip into FIM hell!

There are many great articles that list various steps to clear "stuck on starting" in FIM – including the de facto bible of Stuck on Starting by Spencer Harbar:

http://www.harbar.net/articles/sp2010ups2.aspx

However in certain circumstances the usual steps of clearing this issue don't always work.

Recently I had an issue where an OOTB SharePoint 2010 farm (No service packs or CUs applied) had un-provisioned itself.

When trying to re-provision, the one off timer job that does this did not materialise and I was left in a state of "Stuck on Starting" but with no timer job, all services stopped and the application online. This became very frustrating and no matter what I tried I could not get rid of the "Starting" state.

To rectify this I used the following method in this order:

The customer was using OOTB SharePoint config with a named SQL instance, so by default the service would not re-provision until the June CU or a hotfix was applied (see here for error details). In this case the customer was not ready to go to SP1, so the hotfixes had to be applied.

After this I noticed the "Starting" state was still persistent in Manage Services on Server, and when trying to start the FIM services manually I got a slew of errors (Event IDs: 22, 3, 26, 2, 3).

Running through the normal steps to clear this issue did not remediate it. I'm going to list the steps here for completeness, as in most cases they would have worked:

  • Stop User profile service and User profile sync service on ALL SharePoint servers in the farm
  • Clear timer config cache by following the "resolution" documented at http://support.microsoft.com/kb/939308
  • Delete the pending timer job related to the UPA sync service provisioning using Central Administration -> Monitoring -> Check job status
  • Confirm that the Security Token Service has only Windows and Anonymous authentication enabled (from IIS Manager -> SharePoint Web Services -> Security Token Service -> Authentication, under IIS)
  • On the SharePoint server, open the registry editor and confirm that the information related to FIM (database server name, sync DB name etc.) is properly populated. If found wrong, back up the registry key and edit the values to reflect the right information:
  • HKLM\SYSTEM\ControlSet001\Services\FIMService
  • HKLM\SYSTEM\ControlSet002\Services\FIMService
  • HKLM\SYSTEM\CurrentControlSet\Services\FIMService
  • Delete all FIM-related certificates from the certificate store (for both the account and the system). Make sure that no FIM-related certificate is listed in any folder in certificate manager.
  • From Central Administration -> Application Management -> Manage service applications, make sure the User Profile service application and its proxy are started. If found stopped, it is always recommended to recreate them, taking necessary backups (My Sites)
  • Start User profile service on the machine where you would like to start the User profile sync service (use central administration console)
  • Once the service is started, try starting the User profile sync service from the central administration site

So as you can see there were quite a few steps without success.

If you use the Get/Stop SPServiceInstance cmdlets you will be able to see the state SharePoint thinks the UPA service is in. In this case there were three UPA services; two of them were stopped, but one of them was "stuck" in provisioning!

In the SP2010 command shell use the following command to list them:

get-SPServiceInstance | where {$_.TypeName -like "*User*"}

If you see an instance that is "Provisioning" you will need to stop it as it is this that is causing the issue:

Stop-SPServiceInstance -identity <GUID>

They should all now be disabled, and the service should be registered as stopped in the interface.

For good measure at this point I turned Verbose logging on for UPA (don't forget to turn it off when you're done). I then started ULS viewer and filtered category on "User Profiles" which helps to track the actual provisioning process so you can view errors as they occur.

Start User Profile Service through the interface as normal

You should be able to watch the process provisioning in its entirety (usually takes around 15 minutes)

Success!

I then recreated my synchronisation connections and performed a full sync.

Hope you find this useful!

Aug 02
Error deploying administration application pool credentials

Originally posted on 21 December 2011

When trying to change administrative credentials using the manage accounts interface in SharePoint 2010 I often come across this error:

   

Error deploying administration application pool credentials. Another deployment may be active. An object of the type Microsoft.SharePoint.Administration.SPAdminAppPoolCredentialDeploymentJobDefinition named "job-admin-apppool-change" already exists under the parent Microsoft.SharePoint.Administration.SPTimerService named "SPTimerV4".  Rename your object or delete the existing object.

   

When the managed accounts section is used to change the password it creates a timer job, "job-admin-apppool-change", and if this job has been corrupted (for example, a system reboot occurred half way through) then the job "sticks", which can manifest itself as the error above.

The best and quickest way of rectifying this I have found is to delete the job and try again.

Use the following PowerShell to delete the job:

   

$tj = Get-SPTimerJob -Identity "job-admin-apppool-change"

$tj.Delete()

Now try resetting your password again using the managed accounts link and you should be successful.

Aug 02
User Display Name Set to Domain\Username – SharePoint 2010

Originally posted on 12 September 2011

In truth this error can be caused by a number of factors and there is not just one answer. The best way to approach this error is by working through all the possible reasons this could be happening and eliminating each step. This way you will get to the root cause and (ultimately fix) any underlying issues you have with your UPA/ContentDB synchronisation process.

To trouble shoot this issue properly we need to understand what is actually going on and what may be causing the issue – so here goes:

The Theory bit…

When dealing with user profiles it's very easy to get misled by all the terminology and the path of what actually happens during the site logon process. So this next bit is intended to help clarify the process and hopefully lead us closer to a permanent fix.

When a user authenticates against a site it uses the information in the content database for that Site Collection. This information is kept in the userInfo table and is synchronised with the central User Profile Store by two Sync Jobs:

  • User Profile to SharePoint Full Synchronisation (hourly by default)
  • User Profile to SharePoint Quick Synchronisation (every 5 minutes by default)

The key to these jobs is in the names – which is also why they are routinely misunderstood. Most people think these are the normal full and incremental Active Directory jobs for the user profile application, and with the dubious titles it is easy to understand why!

In reality these jobs take information FROM the User Profile Store to the SHAREPOINT content databases throughout the day and they are responsible for refreshing the information in those content databases with the information stored in the User Profile Store.

The quick job is designed to run every five minutes and only synchronises users that have been set to active since the last synchronisation, whereas the full synchronisation job synchronises all the users' details in the content DB. The idea behind these jobs is to ensure that when a user logs on for the first time they have to wait the least amount of time possible before their details are updated (if the quick job doesn't get you within five minutes, the hourly full job will :) )

So when a site collection is setup for the first time in its shiny new DB the userInfo table lists who has, and hasn't got access to the site but until that person hits the site for the first time, the isActive flag for the user is set to zero.

The isActive flag is used by these two jobs to define whether or not the information for a user should be synchronised. So when a user hits the site for the first time, it sets this flag to 1 and the next quick or incremental job will pick this up and synchronise the information accordingly.

This means that when a user logs on for the first time the isActive flag is set to 1, but the information about the user will not be present – hence the display name is not set in this content database – and therefore the user account display name will be set to their user account name which is Domain\UserName.

Once the Full or Incremental jobs pick up the change (i.e. they see that the user now has a 1 value for isActive) they sync the user's data from the User Profile Store (including the ever-important display name) and et voilà! On refresh the user's display name is changed from Domain\UserName to Display Name (as well as making all the other profile information available, like phone number etc.)

So from reviewing this process above and confirming the symptoms we can start to rectify any errors.

Defining the Symptoms – (Potential causes for User Showing as Domain Username)

The way to do this is to ask two simple questions which can go a long way to tracing down the exact source of the error:

  • If the user logs on to a different site collection (in a different ContentDB) does this still happen?

    This will help you define if the error is localised to the current site collection DB or is spanned across multiple Site Collection DBs. This will help us ascertain whether or not the full and incremental synchronisation jobs are working/running and if the issue is localised to one DB or is farm wide

  • Is the user set as active in the current site?

    This will tell us if the current site collection is expecting to have the information for the user. If the user has never logged onto the site, or has been set to inactive (i.e. they haven't logged on for 30 days, which by default sets isActive back to zero), then they won't have their display name set until the next Full or Incremental job is run. Further to this, if the jobs have been run but the user has not been synced, we can start to look elsewhere.

The following script lists the details of the userInfo table with the isActive flag, login and title listed first, to help you quickly determine if the user is active or not.

USE [NAME_OF_YOUR_CONTENT_DB_GOES_HERE]
GO

SELECT TOP 10
     [tp_SiteID]
    ,[tp_ID]
    ,[tp_IsActive]
    ,[tp_Login]
    ,[tp_Title]
    ,[tp_Email]
    ,[tp_DomainGroup]
    ,[tp_SystemID]
    ,[tp_Deleted]
    ,[tp_SiteAdmin]
    ,[tp_Notes]
    ,[tp_Token]
    ,[tp_ExternalToken]
    ,[tp_ExternalTokenLastUpdated]
    ,[tp_Locale]
    ,[tp_CalendarType]
    ,[tp_AdjustHijriDays]
    ,[tp_TimeZone]
    ,[tp_Time24]
    ,[tp_AltCalendarType]
    ,[tp_CalendarViewOptions]
    ,[tp_WorkDays]
    ,[tp_WorkDayStartHour]
    ,[tp_WorkDayEndHour]
    ,[tp_Mobile]
    ,[tp_Flags]
FROM [NAME_OF_YOUR_CONTENT_DB_GOES_HERE].[dbo].[UserInfo]

The result should look like this:

How to resolve the issue

The information you have gained from applying the above logic should be pointing you in the right direction by now, so here is a list of fixes that can be applied to help rectify the issue.

Confirm the users AD account is legitimate and not locked

First and foremost make sure the user is active in AD. It may seem a little obvious but checking it may save you a lot of time – needless to say a user being disabled in AD will not be able to log on to the site.

Confirm the user is an active user of the site

As mentioned above, look at the userInfo table in the content DB for the site collection in question and see if the user's isActive flag is set. From here you can also see all the other profile information that is available, which will be a good indication of whether or not the account is being synchronised.

Remove and Re-add the user in the site collection

Sometimes the user's details may become stale. If you check the content DB and they are set to active, the jobs are running fine, and the issue is related to just one user, then deleting and re-adding the user in quick succession will refresh the user data, as it triggers another full import from AD for that user. Once the Full or Incremental jobs pick up the user's changed status they will synchronise the information again. Don't forget to make a note of the user's security group membership beforehand and add them back once the operation is complete.
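As a sketch, the remove and re-add can be done from PowerShell. The account and site URL here are hypothetical placeholders, and the caveat above about recording group membership first still applies:

```powershell
# Hypothetical user and site collection - replace before use
$web  = "http://YOUR_SITE_COLLECTION"
$user = "DOMAIN\UserName"

# Remove the stale user record, then add the account straight back
Remove-SPUser -Identity $user -Web $web -Confirm:$false
New-SPUser -UserAlias $user -Web $web
```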

Checking the last time ContentDBs were synchronised

You can check the last time the content DBs were synchronised by using the following STSADM commands. The -listolddatabases command lists all databases that have not been synchronised for longer than the day value that you provide. So in this example it lists all DBs that have not been synchronised in the last day:

stsadm -o sync -listolddatabases 0

If you subsequently use the –deleteolddatabases command this will delete the records in the userInfo table in the ContentDBs older than the day value given which means that on the next Full or Incremental sync the table will be totally repopulated. In this case any records not synchronised within the last day will be deleted and repopulated – THE DATABASES WILL NOT BE DELETED! Only the corresponding records in the table:

stsadm -o sync -deleteolddatabases 0

The language used in this command can come across as quite destructive but in all fairness it's essentially triggering a re-sync.

Sync the individual user from AD

You may come across occasions where individual accounts have not been synchronised and removing them and re-adding them from the site collection may not be acceptable.

n.b. This could be for a number of reasons, but it is most likely to involve membership of lots of permission groups – so re-adding them to each might be a more labour-intensive task than you are willing to undertake.

In this scenario you can use the Set-SPUser -SyncFromAD command, which essentially synchronises the user's information table record with the contents of Active Directory. The script below does this for a specific user in one site collection:

Add-PSSnapin Microsoft.SharePoint.PowerShell

Set-SPUser -Identity "DOMAIN\UserName" -Web http://Siteyouwantsync/my -SyncFromAD

This may be the less destructive option for your scenario, but please be warned that the next full or incremental profile sync with AD will overwrite this information. This shouldn't be a problem, as you've synchronised the user with AD using this command anyway :)

Sync All Users from AD

Like the above scenario you may have to re-sync all users in one particular site collection rather than just one user. In this scenario removing each user from the site collection would not be practical and you may have good reasons for not running the Full or Incremental sync jobs. This script iterates through the site collection in question and synchronises each user in turn with AD.

Add-PSSnapin Microsoft.SharePoint.PowerShell

# List the users in the site collection first
Get-SPUser -Web http://SiteToSync/my

# Then sync each of them from AD in turn
Get-SPUser -Web http://SiteToSync/my | ForEach-Object {Set-SPUser -Identity $_ -Web http://SiteToSync/my -SyncFromAD}

Once again these commands will be overwritten by the profile store information when the next full AD synchronisation takes place.

If in doubt - check FIM!

If by this stage you are still struggling with this issue then you may be having trouble with UPA/FIM, and you may need to investigate your full user profile synchronisation and setup. In this scenario I'd refer you to the rather brilliant set of blogs by Spencer Harbar at:

http://harbar.net/

Paying particular attention to:

http://www.harbar.net/articles/sp2010ups.aspx

The main point from here will be to check the FIM interface and do a MetaVerse search for your problem user – this will help determine the state of the user and indeed the last time the user's attributes were synchronised with AD.

The FIM client can be found here:

%SystemDrive%\Program Files\Microsoft Office Servers\14.0\Synchronization Service\UIShell\miisclient.exe

   

One Final Word of Caution…

In this post I've covered two highly contentious and volatile areas, namely:

  • Connecting to SharePoint contentDBs
  • Connecting to the FIM interface

Whilst connecting to these clients and viewing information will not harm your installation, you must be aware that ANY changes to SharePoint information contained within SharePoint databases are unsupported, and any changes made in the FIM interface are also unsupported. So please ensure you only use these methods to view information to aid troubleshooting, not as an actual troubleshooting tool! Sermon over… :)

Hope someone finds this useful – feel free to send feedback via the comments form on the right hand side.


Hi, welcome to SP2013 Blog - formerly known as SP2010 Blog.  Hopefully you will find something useful on here - and if not then thanks for dropping by anyway :)  

Feel free to follow me on Facebook, Twitter or Linkedin - alternatively you can send feedback directly to my email address below

Regards 

Heath Groves MCM MCSM 

Heath.Groves@SundownSolutions.co.uk