Performing schema updates on your live Active Directory – from a change control perspective it can fill you with fear, can’t it? It’s one of the few operations that are forest-wide and effectively irreversible (bar a forest recovery process).
It’s an incredibly hard subject to fully qualify, but at a basic level you need to know what happens when you apply a schema update to your system. I’ll go into the ins and outs in upcoming articles – but at a very basic level, it’s a good start to have a copy of your system to work from for testing purposes.
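Part of knowing “what happens” is simply knowing what version your schema is at before and after a change. Here’s a minimal sketch of that check, assuming the ActiveDirectory PowerShell module is available (and, for the Exchange line, that the forest has been Exchange-prepared at some point):

Import-Module ActiveDirectory

# The schema naming context, e.g. CN=Schema,CN=Configuration,DC=contoso,DC=com
$schemaNC = (Get-ADRootDSE).schemaNamingContext

# objectVersion on the schema container is the base AD schema version
# (69 = Windows Server 2012 R2, 87 = 2016, 88 = 2019).
Get-ADObject $schemaNC -Properties objectVersion |
    Select-Object DistinguishedName, objectVersion

# Products stamp their own versions too; for Exchange it's the rangeUpper
# value on the ms-Exch-Schema-Version-Pt attribute definition.
Get-ADObject "CN=ms-Exch-Schema-Version-Pt,$schemaNC" -Properties rangeUpper |
    Select-Object Name, rangeUpper

Record those values before the update, run the update against your copy, and compare afterwards – it tells you exactly what the update has stamped, and gives you something concrete to put in the change record.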
To be absolutely fair, schema updates very rarely go wrong – and even when they do, hardly anybody notices. What’s going to go wrong? A system breaks in the middle of the update? Usually you can just run the update again and let it finish. The biggest risk is poor planning. I mentioned a while ago that a friend of mine had rather unfortunately committed a schema update to their live systems without having done the prep for a previous edition – and of course, this blocked their ability to implement the platform they wanted. It wasn’t unrecoverable – but it was… a little stressful to recover from. Editing your schema is not for the faint-hearted. I’ll happily dive in on a demo or test platform – but live? I’d avoid that at all costs. For example, here’s an article on removing an Exchange 2013 schema prep:
Removing an Exchange 2013 Schema Prep
It gives you an idea of what’s involved anyway.
I spend a lot of my time designing and being involved in the implementation of Microsoft Lync – and that always involves a schema update. I’m always slightly surprised at most companies’ attitudes to schema updates. Pretty much 90% of them think ‘pah, what’s the worst that could happen?’ …and continue on regardless. A high percentage of them have little idea of what’s involved if it goes wrong.
My bigger clients usually have at least thought about it – some of them even tested recovery of the forest.
From a consultative point of view, I always try to include the ramifications of what’s about to happen along with the recovery routes – i.e. attribute/schema updates, and how hard they are to roll back. On a tangent for a second – I get why a lot of techs see change control as a pain that slows down their day job. I try to look at it another way: it’s transference of risk and responsibility. If you make completely clear what is being done, what the risk is, and what the recovery method is, the decision to go ahead sits with the business, not with the tech. As long as you’re broadly right, you’ve enabled the business to make a reasonable decision against risk.
Anyway, I’m getting ahead of myself. Testing of schema updates, and forest recovery, are fairly key to any system – and on that front I’m surprised by how many companies don’t do it. I suppose in some ways that goes to show how generally reliable it all is. I’ll be producing a number of articles on how to risk-assess, protect, and recover from schema updates – and this is just one of those base articles. There’s a lot of noise out there on the best ways of doing this, though – so I’d be interested in what people think.
Let’s look at how you can create a copy of your Active Directory for testing purposes. It’s simple enough to do – and there are a few ways of doing it. Firstly, let’s cover how to create a replica by implementing a new domain controller, synchronising it, then physically removing it and seizing the FSMO roles on that unit.
You can read about that process here:
Duplicating Active Directory for a Test Environment
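To give a flavour of the final step of that process, here’s a minimal sketch – assuming the cloned DC has already been disconnected from the production network, and treating the server name as a placeholder:

Import-Module ActiveDirectory

# On the isolated copy ONLY: seize all five FSMO roles so the clone can stand
# alone as a self-contained test forest. -Force performs a seizure rather than
# a graceful transfer, because the real role holders are unreachable from here.
Move-ADDirectoryServerOperationMasterRole -Identity "TEST-DC01" `
    -OperationMasterRole SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster `
    -Force

# Confirm the clone now thinks it owns everything.
netdom query fsmo

Run that against a production DC by mistake and you will have a very bad day, which is exactly why the clone needs to be properly isolated first.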
Some people don’t like this method as they think it’s risky and a little dirty – and I kind of get why they would think that. It’s also possible to do a normal forest recovery type restore, and we’ll look at that next….
Testing Schema Updates – Author: Mark Coughlan
Automating Common Administrative Tasks
In my working life of talking to many companies about their technology usage, and their deployment plans, I tend to find that the needs & wants of the average Systems/User Administrator are often forgotten. This, I think, is a dangerous mistake to make with a number of technology deployments as it can lead to issues and frustrations with deployment & administration.
Spend some time making your Administration team’s lives simpler, and you’ll be repaid with faster turnaround times and fewer errors in administrative functions. Just for clarity on that last bit, I’m not suggesting that Administrators are error prone – far from it – but ask anyone to manually configure telephony for 30 users (for example) and expect them to get it 100% right every time – well, I think you’re asking a lot.
In my mind, if you are doing something specific, patterned, and repeatable by hand, then quite frankly you’re doing it wrong. Wrong in that it’s slow and, probably more importantly, error prone.
Microsoft Lync and Exchange are prime examples. There are loads of PowerShell tools available for automating tasks, and for implementing certain functions, features, and processes fully automatically and with minimal input. The problem, though, is that they require scripting skills. A lot of Sys Admins are very comfortable with scripting – but it still takes time and effort. What about front-line user managers? The ones who set up users and configure their telephony policy, for example – do they know scripting? Do you WANT them to know how to script-admin against your systems? You’d hope the risk on that last one would be negated by security policy, of course, but that’s not really the point.
When I’ve worked on larger projects I’ve always tried to put effort into simplifying take-on processes, whether those take-on processes are migrations from legacy or delivery of new services. Make it simple, make it repeatable – and what you achieve are fewer errors and faster take-on. Fewer errors means fewer headaches from support calls, and fewer unhappy users during migration/take-on.
How does that apply to Microsoft Lync and Exchange, my most common take-on/migration projects? Well, these products have their own administration tools – Lync Control Panel, for example. Having multiple tools does involve additional understanding and take-on from the administration staff. Admittedly it’s really not hard to administer users in Lync Control Panel – but it is something typically new, and it is something additional.
The other thing – and probably the real driver – is that most common tasks are utterly repeatable. Think about that – repeatable. The task to achieve the end game is the same, every time. If that doesn’t shout out automation, I don’t know what does.
Setting up a new user in an organisation is a great example – add the user in Active Directory Users and Computers, add them to the right groups, etc. That gives them their identity. Next, jump into Exchange management and configure their mailbox. Then jump into Lync management and configure their Unified Comms stuff. I’m sure you can see where I’m going with this – it’s a faff. A repeatable faff that’s prone to error.
How do I fix this? Well, I extend ADU&C to automate the common tasks of (there’s a rough sketch of what that looks like after the list):
- Configuring their Exchange mailbox
- Configuring their Lync environment
- Configuring their telephony in Lync etc.
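To give a flavour of what those extensions end up calling under the hood, here’s a minimal sketch only – assuming the Exchange and Lync management shells are loaded, and that the database, pool, and policy names are placeholders for your own:

param(
    [Parameter(Mandatory = $true)][string]$Identity,   # sAMAccountName of the new user
    [string]$LineUri                                    # optional DDI, e.g. tel:+441234567890
)

# 1. Exchange: give the user a mailbox (placeholder database name).
Enable-Mailbox -Identity $Identity -Database "MBX-DB01"

# 2. Lync: enable the user for IM/presence against a placeholder pool.
Enable-CsUser -Identity $Identity -RegistrarPool "lyncpool01.contoso.com" -SipAddressType EmailAddress

# 3. Telephony: enterprise voice plus a placeholder voice policy, if a DDI was supplied.
if ($LineUri) {
    Set-CsUser -Identity $Identity -EnterpriseVoiceEnabled $true -LineURI $LineUri
    Grant-CsVoicePolicy -Identity $Identity -PolicyName "UK-National"
}

Wrap something like that behind an ADU&C context menu entry or a one-line launcher, and the front-line administrator never has to see the cmdlets at all.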
There’s absolutely no reason this can’t be extended to take-ons too, rather than just new users. For example, with Lync deployments I often put in scripts that let Administrators enable groups of people or, in certain situations, whole Organisational Units.
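A minimal sketch of that idea, again assuming the Lync management shell, with the OU, pool, and SIP domain as placeholders:

# Enable everyone in a given OU who isn't already enabled for Lync.
Get-CsAdUser -OU "OU=Sales,DC=contoso,DC=com" |
    Where-Object { -not $_.Enabled } |
    Enable-CsUser -RegistrarPool "lyncpool01.contoso.com" -SipAddressType SamAccountName -SipDomain contoso.com

In practice you would wrap that with logging and a dry-run option before letting anyone loose on a whole OU, but the principle is the same: one repeatable command instead of hundreds of clicks.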
The result? Happier administrators, fewer take-on/enablement errors, fewer support calls, increased productivity, and a certain feeling of TECH AWESOME. You can’t argue with that can you?
The video below gives a run through of some of the Lync stuff – it will give you a good idea of what I mean. The hi-def version can be viewed by clicking here.