
Windows Task Scheduler Keyset does not exist Error

Task Scheduler is one of those great little components that, once you set it, you tend to forget it. One of the Windows 2003 servers I look after had been running scheduled tasks flawlessly for over a year until they suddenly stopped one day. Every time I went to open or edit an individual task’s properties, a dialog with the following message appeared:

General page initialization failed.
The specific error is:0x80090016: Keyset does not exist
An error has occurred attempting to retrieve task account information. You may continue editing the task object, but will be unable to change task account information.

A solution to this problem is not readily apparent, all the more so because the ubiquitous net search returns results that relate to Windows 2000, not 2003. After some playing, and with reference to the MS KB article http://support.microsoft.com/default.aspx?scid=kb;en-us;246183, I got the Task Scheduler working again by doing the following:

  1. Stop the Cryptographic service
  2. Delete the contents of the C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\S-1-5-18 folder – as a precaution, I made a backup first.
  3. Start the Cryptographic service (a server restart may be required)
  4. Re-assign the Run As user account for every required scheduled task
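
For reference, these steps lend themselves to scripting.  Below is a minimal Python sketch of the same procedure; the service short name (CryptSvc), the backup location and the task/account names are assumptions for illustration rather than values taken from the affected server.

    import shutil
    import subprocess
    from pathlib import Path

    # Folder named in step 2; back it up before touching it.
    RSA_DIR = Path(r"C:\Documents and Settings\All Users\Application Data"
                   r"\Microsoft\Crypto\RSA\S-1-5-18")
    BACKUP_DIR = Path(r"C:\Backup\S-1-5-18")                      # hypothetical location
    TASKS = {"Nightly Backup": ("DOMAIN\\taskuser", "P@ssw0rd")}  # hypothetical tasks

    # 1. Stop the Cryptographic service (short name assumed to be CryptSvc).
    subprocess.run(["net", "stop", "CryptSvc"], check=True)

    # 2. Back up, then delete, the contents of the key container folder.
    shutil.copytree(RSA_DIR, BACKUP_DIR)
    for item in RSA_DIR.iterdir():
        item.unlink()

    # 3. Restart the service (a full server restart may still be required).
    subprocess.run(["net", "start", "CryptSvc"], check=True)

    # 4. Re-assign the Run As account on each affected scheduled task.
    for task, (user, password) in TASKS.items():
        subprocess.run(["schtasks", "/Change", "/TN", task,
                        "/RU", user, "/RP", password], check=True)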

Virtualization

I’ve been a fan of virtualization for a while and have played with, and extensively evaluated, a number of different solutions over the last couple of years in both the desktop and server arenas.  A few early incarnations did leave a lot to be desired in terms of overall functionality and reliability.  For the most part, however, they have all now become viable solutions.

So what is Virtualization? Well, there are already reams and reams of articles, whitepapers and blog entries out there explaining this.  Inevitably some are very good, whilst others are biased towards one solution or another.  Some of the latter can descend almost into pure vitriol (VMWare & Microsoft blogs?).  The water is muddied somewhat further by the almost inevitable multiple usages and definitions of the term Virtualization.  Wikipedia has an entire page listing the varying types of Virtualization available.

From my perspective, the best way to describe and explain virtualization is by the typical end-product: the more efficient and manageable usage of IT resources.  There are lots of other advantages – hardware independence (aka separating runtime code from physical hardware), perceived increased resilience, snapshots, easier backups – but what normally makes the case is greater efficiency and utilisation of IT resources.  In other words, the dreaded Return on Investment (ROI).

Consider this:  one of the most common forms of Virtualization is Platform Virtualization, where the operating system is separated from the hardware upon which it is running. Instead of adopting the traditional route of installing an operating system directly onto the computer’s hard disk, it is installed into a software container known as a Virtual Machine (VM).  This virtual machine is hosted by a piece of software called a hypervisor.  The hypervisor sits between the VM and the physical hardware of the host computer.  Instead of allowing the VM direct access to the host computer’s hardware, it provides a virtual hardware infrastructure upon which the Virtual Machine runs.  You are not limited to one Virtual Machine per hypervisor: in Platform Virtualization, you may have several Virtual Machines all running concurrently.
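
To make that layering a little more concrete, here is a deliberately simplified toy model in Python.  None of the names below come from any real product; it exists purely to show the hypervisor handing out slices of the host’s hardware to several concurrently running guests.

    from dataclasses import dataclass, field

    @dataclass
    class VirtualHardware:
        """What a guest sees: a slice of the host, not the host itself."""
        cpus: int
        ram_mb: int

    @dataclass
    class VirtualMachine:
        name: str
        hardware: VirtualHardware   # the VM only ever talks to this virtual layer

    @dataclass
    class Hypervisor:
        host_cpus: int
        host_ram_mb: int
        guests: list = field(default_factory=list)

        def create_vm(self, name, cpus, ram_mb):
            # The hypervisor sits between guests and the physical hardware,
            # presenting virtual hardware carved out of the real machine.
            if sum(vm.hardware.ram_mb for vm in self.guests) + ram_mb > self.host_ram_mb:
                raise RuntimeError("host RAM exhausted")
            vm = VirtualMachine(name, VirtualHardware(cpus, ram_mb))
            self.guests.append(vm)
            return vm

    # Several Virtual Machines running concurrently on one physical host.
    host = Hypervisor(host_cpus=8, host_ram_mb=16384)
    host.create_vm("spam-filter", cpus=1, ram_mb=2048)
    host.create_vm("monitoring", cpus=2, ram_mb=4096)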

So why is this of any use?

I’ve had this question quite a few times, and this is the best explanation I have come up with so far for Platform Virtualization.  It is a bit vanilla in nature, but I feel that it is a good general explanation:

Organisation A has three identical servers, none of which ever utilises more than 30% of its total resources.  In essence, 70% of the capacity of each server is wasted.  With Virtualization, all three servers could be converted to three Virtual Machines and then hosted on one physical server.  Doing this will save the organisation both money and space:  they will only be paying for the running costs of one physical server instead of three, and they will only require the space of one server.
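
Putting some rough, purely illustrative numbers on that example:

    # Illustrative consolidation arithmetic for Organisation A.
    servers = 3
    utilisation = 0.30                       # each server peaks at ~30% of capacity
    combined_load = servers * utilisation    # 0.9 -> comfortably fits on one host

    running_cost_per_server = 1_000          # hypothetical annual running cost
    cost_before = servers * running_cost_per_server   # 3,000
    cost_after = 1 * running_cost_per_server           # 1,000
    print(f"combined load {combined_load:.0%}, saving {cost_before - cost_after} per year")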

I use Platform Virtualization extensively.  Typically I have 3 or 4 virtual machines running at any one time, two of which run permanently.  Whilst the majority are for testing, the two permanent machines are crucial to my day-to-day operations: one is my Spam Filter and the other is a monitoring server.

As both a developer and a sysadmin, Virtualization has made the process of testing and evaluation a whole lot easier.  If I look back to the heady days of 2000/2001, the company I was then working for maintained an extensive suite of computers of varying vintages running a multitude of operating systems (side note:  I’m still trying to work out why we were testing a multimedia CD on a Sun Sparc).  When you figure in the space required, the power usage and the time needed to manage and maintain such a setup, the costs do start to add up.  For me, the arrival of virtualization has all but nullified this requirement.  Instead of having a stack of PCs lying around, I now have a stack of Virtual Machines.

What do I use?

As I mentioned above, I have evaluated a substantial number of solutions, including amongst others VMWare Server, Microsoft Virtual Server and Sun VirtualBox.  After going through all of these, and taking into consideration my internal requirements – I’m not running a data farm, remember – I have chosen to use Microsoft Hyper-V Server.

My choice of Hyper-V is not because I am some sort of evangelical Microsoft user.  Truth be told, whilst I tend to use MS products most of the time, my operating decisions are based on the sound engineering principle of using the right tool for the job in hand.  Consequently, two of my production Virtual Machines are running Ubuntu Linux, not Microsoft Windows.  I would be lying if I said that the cost – Hyper-V is free – was not a factor, but at the end of the day I cannot justify the capital expenditure of VMWare’s equivalent offering for my own uses.

To date, I am very impressed with Hyper-V, even more so considering I am running the Release Candidate of Hyper-V R2.  It has been rock-solid in terms of reliability.  The only problems I have really experienced have been with Microsoft Hyper-V Manager, and these were more down to Windows’ security systems than anything else.  It’s not a perfect solution – there is no local GUI, so all management has to be done via a remote tool (Hyper-V Manager) or via the command line – but for the price and feature set, I’m not complaining.
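
As an aside, the remote tools and the command line are not the only options: the Hyper-V WMI provider can be scripted against as well.  The fragment below is only a sketch, assuming the first-generation root\virtualization namespace that ships with Hyper-V/Hyper-V R2, the third-party Python wmi package, and a hypothetical host name.

    import wmi  # third-party package (pip install wmi), built on pywin32

    # "hyperv-host" is a placeholder; the namespace is the first-generation
    # Hyper-V WMI provider exposed by Hyper-V Server and Hyper-V R2.
    conn = wmi.WMI(computer="hyperv-host", namespace=r"root\virtualization")

    for system in conn.Msvm_ComputerSystem():
        # The physical host also appears here; guests are flagged by Caption.
        if system.Caption == "Virtual Machine":
            state = "running" if system.EnabledState == 2 else "stopped/other"
            print(f"{system.ElementName}: {state}")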

As I expand my usage of Hyper-V, I will post further details of what happens, especially with regard to Linux Virtual Machines and ongoing management.