Author Archives: Alexander John

About Alexander John

I am an IT Professional and Web Developer living and working in the United Kingdom. I describe myself as an IT generalist as I have worked in a wide variety of IT and web fields and technologies. Whilst I am a Microsoft Certified Professional (MCP), I utilise a wide variety of technologies, including Linux and Android, in my day-to-day work. I am a founder and director of an SME-orientated web services company, Calzada Media Limited, where my daily workload varies from web development through to IT infrastructure management.

Resetting Webmin password for Ubuntu

The inevitable happened: I forgot the password for Webmin on one of my Ubuntu servers.  In my defence, it is a server that I principally manage via SSH.  Being Ubuntu, the necessary files are not in the locations given in the Webmin documentation.  However, a quick Google search found a blog post with the solution.  For my own records, I am duplicating it below:

  1. Open a shell or SSH session on the target server
  2. Enter the command
    • /usr/share/webmin/changepass.pl /etc/webmin username password
    • Where username = webmin username.  For me this was root
    • And password = new password.
  3. Login to Webmin to test
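
For example, to reset the password for the root Webmin account (the new password below is just a placeholder):

/usr/share/webmin/changepass.pl /etc/webmin root MyNewPassword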

Thanks to and original information from http://ranawd.wordpress.com/2010/06/14/reset-webmin-password-for-ubuntu/

Given the number of passwords I seemingly have to remember, I think it is time that I employed some form of password safe.

Remotely managing disks on Windows 2008/Hyper-V Server R2

As I’m mooting some hardware upgrades to my bash’n’crash Hyper-V server, I wanted to check out a few things with regards to its performance and general health.  I duly fired up Server Manager, but was thwarted when I tried to use Disk Management and got the message “The RPC server is unavailable”.  I checked the Firewall, and the requisite rules (see below) were enabled.  I recycled the Virtual Disk Service and still had no luck.

After hunting around a while I found a Technet Forum post that covered this.  You have to enable the necessary inbound rules both on the server being managed and the managing computer.  Once I had done this, and restarted Server Manager, I could access Disk Management.

For reference, the two rules that need to be enabled are:

Remote Volume Management – Virtual Disk Service (RPC)

Remote Volume Management – Virtual Disk Service Loader (RPC)

As per the Technet post, I found that you don’t need the Remote Volume Management (RPC-EPMAP) rule enabled.  Needless to say, I only enabled both rules for the Domain firewall profile.
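
If you are doing this on Server Core or Hyper-V Server where the firewall GUI isn’t to hand, something like the following from an elevated command prompt should do the job. Note that it enables the whole rule group across all profiles, including the RPC-EPMAP rule, so it is broader than what I describe above:

netsh advfirewall firewall set rule group="Remote Volume Management" new enable=yes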

Automatically Loading PSHyperV Library

Late last year I wrote about getting the Powershell Management Library for Hyper-V (PSHyper-V) up and running on my Hyper-V installation. Although a newer version of the library was released in January, I simply hadn’t gotten around to updating my server.  If I’m honest, beyond the occasional restart following an update (or power cuts – thanks for nothing EDF Energy) all of the Hyper-V servers under my care tend to sit in the corner and are generally forgotten about.  Far more attention is paid to the Virtual Machines than the actual host upon which they rely.

One change between the versions of PSHyperV has been the move from a standard script to a Powershell module.  As I prefer the library to be loaded automatically, this required a change to the user’s Powershell profile.  The slight fly in the ointment is that I had forgotten how to do this, so here is a quick reprise for my own memory.

1.  Open Powershell.  If you are within a command prompt, type cmd /c start powershell to open a new powershell window.

2.  In Powershell, type $profile.  This will get you the full path to where your profile is stored.  Your profile is a Powershell script that executes whenever a Powershell prompt is opened.  The path will look something like this:

C:\Users\<your username>\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1

3.  Your profile file may or may not exist.  If you have not already used it elsewhere, the chances are that it doesn’t.  To create a new profile file, enter new-item $profile -itemtype file -force in Powershell.

4.  Now open the profile file.  If you are using Server Core or Hyper-V Server R2, you can still use Notepad (it is present, but missing a few features), so enter notepad $profile in Powershell.

5.  Within Notepad, enter the Import-Module command for PSHyperV.  If you installed PSHyperV using the supplied install.cmd, the path should be similar to:

Import-Module "c:\Program Files\modules\hyperv\hyperv.psd1"

6.  Once done, save the file and exit Notepad.

7.  To test, open a new Powershell prompt and type Get-VM.  If everything has gone as planned, you will get a list of all of the VMs present on your Hyper-V server.

This process will only work for the current logged on user.  If you have multiple user accounts on Hyper-V, you will need to repeat this process for all that require access.

Finally, if you have not already done so you will need to set the execution policy for Powershell.  I’ve found that all you require is RemoteSigned.  To set this, enter set-executionpolicy remotesigned within Powershell.
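
For reference, the whole process (minus the Notepad editing) can also be done directly from a Powershell prompt. A rough sketch, assuming the default install.cmd module location used above:

# Create the profile only if it does not already exist
if (-not (Test-Path $profile)) { new-item $profile -itemtype file -force }
# Append the module import to the profile
add-content $profile 'Import-Module "c:\Program Files\modules\hyperv\hyperv.psd1"'
# Allow locally created scripts, including the profile, to run
set-executionpolicy remotesigned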

Getting System Information from the command line or Powershell

Although there are any number of tools available to gather and collate information regarding the configuration and composition of a computer, these are often overkill when all you wish to learn is one or more basic details such as the processor type, memory or computer name.

A quick shortcut is to gather this information from the command line or Powershell.  Windows XP and later include the systeminfo utility.  Typing this at the command prompt will produce information similar to this:

C:\>systeminfo
Host Name: SOMEPC
OS Name: Microsoft Windows 7 Ultimate
OS Version: 6.1.7100 N/A Build 7100
OS Manufacturer: Microsoft Corporation
OS Configuration: Member Workstation
OS Build Type: Multiprocessor Free
Registered Owner: SomeOwner
Registered Organization:
Product ID: 00428-321-7001132-70186
Original Install Date: 04/05/2009, 10:29:33
System Boot Time: 28/09/2009, 13:50:08
System Manufacturer: Dell Inc.
System Model: Latitude D820
System Type: X86-based PC
Processor(s): 1 Processor(s) Installed.
    [01]: x86 Family 6 Model 14 Stepping 8 GenuineIntel ~2000 Mhz
BIOS Version: Dell Inc. A09, 04/06/2008
Windows Directory: C:\Windows
System Directory: C:\Windows\system32
Boot Device: \Device\HarddiskVolume2
System Locale: en-gb;English (United Kingdom)
Input Locale: en-gb;English (United Kingdom)
Time Zone: (UTC) Dublin, Edinburgh, Lisbon, London
Total Physical Memory: 3,326 MB
Available Physical Memory: 858 MB
Virtual Memory: Max Size: 8,313 MB
Virtual Memory: Available: 5,500 MB
Virtual Memory: In Use: 2,813 MB
Page File Location(s): C:\pagefile.sys
Domain: somedomain.lan
Logon Server: \\SOMEDC
Hotfix(s): 4 Hotfix(s) Installed.
    [01]: KB958830
    [02]: KB969497
    [03]: KB970789
    [04]: KB970858
Network Card(s): 3 NIC(s) Installed.
    [01]: Intel(R) PRO/Wireless 3945ABG Network Connection
          Connection Name: Wireless Network Connection
          DHCP Enabled: Yes
          DHCP Server: 10.10.0.1
          IP address(es)
          [01]: 10.10.0.100
          [02]: fe80::901:8ac7:5a6b:1f56
    [02]: Broadcom NetXtreme 57xx Gigabit Controller
          Connection Name: Local Area Connection
          Status: Media disconnected

If you are using Powershell – if not, why not? – the get-wmiobject win32_computersystem command will return rudimentary details regarding the host PC.

PoSH>get-wmiobject win32_computersystem
Domain              : somedomain.lan
Manufacturer        : Dell Inc.
Model               : Latitude D820
Name                : SOMEPC
PrimaryOwnerName    : SomeOwner
TotalPhysicalMemory : 3487690752
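
A couple of other standard WMI classes are handy for similar quick checks. For example:

# Processor name and clock speed
get-wmiobject win32_processor | select-object Name, MaxClockSpeed
# OS edition, version and visible memory (in KB)
get-wmiobject win32_operatingsystem | select-object Caption, Version, TotalVisibleMemorySize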

Windows Task Scheduler Keyset does not exist Error

Task Scheduler is one of those great little components that once you set it, you tend to forget it. One of the Windows 2003 servers I tend has been running scheduled tasks flawlessly for over a year until they suddenly stopped one day. Every time I went to open or edit an individual task’s properties, a dialog with the following message appeared:

General page initialization failed.
The specific error is:0x80090016: Keyset does not exist
An error has occurred attempting to retrieve task account information. You may continue editing the task object, but will be unable to change task account information.

A solution to this problem is not readily apparent, more so when the ubiquitous net search returns results that relate to Windows 2000, not 2003. After some playing, and with reference to the MS KB article http://support.microsoft.com/default.aspx?scid=kb;en-us;246183, I got the Task Scheduler working again by doing the following:

  1. Stop the Cryptographic service
  2. Delete the contents of the C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\S-1-5-18 folder – as a precaution, I made a backup first.
  3. Start the Cryptographic service (a server restart may be required)
  4. Re-assign the Run As user account for every required scheduled task
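
From a command prompt, the first three steps look something like this (CryptSvc is the short name of the Cryptographic service; the backup destination is just an example):

rem Stop the Cryptographic service
net stop cryptsvc
rem Take a backup copy of the key folder before clearing it out
xcopy "C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\S-1-5-18" "C:\S-1-5-18-backup\" /E /H /I
rem ...delete the folder contents as per step 2, then restart the service
net start cryptsvc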

Getting Powershell Management Library for Hyper-V (PSHyper-V) up and running

One of the great capabilities of Powershell is the ability to extend its functionality through new libraries. One I am currently playing with is the Powershell Management Library for Hyper-V, or PSHyper-V. For Server Core and Hyper-V Server users, the cmdlets contained within this library add a new dimension of functionality and capabilities, and enable admins to reduce their reliance on Hyper-V Manager to perform otherwise simple tasks.

Although you can load the library on demand, there is another method available whereby it may be preloaded as part of your Powershell profile.

  1. Download the latest recommended release of PSHyper-V from http://www.codeplex.com/PSHyperv
  2. Copy the contents of the ZIP archive onto your Hyper-V server. In my instance, these were to a folder called c:\powershell
  3. To get your profile path, type the following from with a Powershell prompt:

    $profile

    You will typically get something like:

    C:\Users\your_user_name\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1

  4. The file Microsoft.PowerShell_profile.ps1 is the Powershell script that is executed upon the startup of any Powershell prompt for your user account. In most circumstances, this script doesn’t exist, so you will need to create it (it is worth checking first). To create the script, enter the following into a Powershell prompt

    new-item $profile -itemtype file -force

  5. Now edit this file and add the path to PSHyper-V.ps1 into it. In my example, this is as follows:

    . c:\powershell\PSHyper-V.ps1

    Note the dot-space prior to the file path. This is required to execute the script.
    (Editing this file on Server Core or Hyper-V Server can be a bit of a trial. In the end, I did it remotely.)

  6. That’s it. To test it, start a new Powershell prompt. If all is well, a list of the loaded PSHyper-V cmdlets will be shown.
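
Alternatively, if you want to test without opening a new prompt, you can re-run your profile in the existing session by dot-sourcing it (standard Powershell behaviour, nothing specific to PSHyper-V):

. $profile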

Thanks to the author of the Technet blog post from which I have sourced most of the information for this.

Virtual Server to Hyper-V: Parallel port driver service failed to start error

I’m currently migrating a substantial number of virtual machines (VMs) from Virtual Server to Hyper-V server.  Although this is a fairly painless process, a common error for Windows VMs relates to the parallel port service.  The system eventlog error goes something like:

The parallel port driver service failed to start due to the following error:  The service cannot be started, either because it was disabled or because it has no enabled devices associated with it.

The reason for this is quite simple:  Virtual Server supported parallel ports, whilst Hyper-V doesn’t.  Typically, any basic VM created through VS will incorporate a virtual parallel port even once the Hyper-V integration components, including the new HAL, are installed.

There are two options here: you can either manually hack your way through Windows removing all traces of the parallel port, or you can completely disable the parallel port service.  The latter is somewhat easier, and may be accomplished in a matter of minutes:

  1. Backup your registry
  2. Open Regedit
  3. Go to HKLM > System > CurrentControlSet > Services > ParPort
  4. Change the Start value to 4 (4 means the service is disabled).
  5. Restart your server – optional, but I do this to confirm this has worked.

That’s it.  Upon the next restart, you should not get any service alert for the parallel port.
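
Alternatively, the registry change in steps 2 to 4 can be made with a single command from within the VM (same key and value as above):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\ParPort" /v Start /t REG_DWORD /d 4 /f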

Getting list of network interface in Linux

For those of us used to Windows management tools, getting a comprehensive list of data regarding installed hardware on a Linux box can be a little daunting.  Whilst recently migrating a virtualised Ubuntu box, I needed to find out just what network hardware was in use.

Within a Linux shell, type the following:

lshw -class network

This will produce a full list of all installed network hardware.  For newbies, the device name you are probably looking for is listed as the logical name.
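
The output for each interface looks roughly like this (trimmed, illustrative values only):

  *-network
       description: Ethernet interface
       logical name: eth0
       ...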

Manually Setting DNS Server Addresses in Ubuntu (Linux)

No matter how many times I have installed and configured Linux, I can never remember the name of the configuration file that stores the DNS/Nameserver details.  This really only applies if your Linux machine is using a static IP address.  In most scenarios, it does not apply to DHCP clients.

DNS server settings are stored in /etc/resolv.conf.  To edit this file, enter the following command from the shell:
$sudo nano /etc/resolv.conf
(If you have installed XWindows/Gnome, you can use sudo gedit /etc/resolv.conf instead.)

Add the entries for your DNS or nameservers as follows

nameserver <IP address of DNS server 1>
nameserver <IP address of DNS server 2>

etc…
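
For example, to use Google’s public DNS servers, the file would simply contain:

nameserver 8.8.8.8
nameserver 8.8.4.4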

Virtualization

I’ve been a fan of virtualization for a while and have played with and extensively evaluated a number of different solutions over the last couple of years in both the desktop and server arena.  A few early incarnations did leave a lot to be desired in terms of overall functionality and reliability.  For the most part, however, they have all now become viable solutions.

So what is Virtualization? Well, there are already reams and reams of articles, whitepapers and blog entries out there explaining this.  Inevitably some are very good, whilst others are biased towards one solution or another.  Some of the latter can descend almost into pure vitriol (VMWare & Microsoft blogs?).  The water is muddied somewhat further by the almost inevitable multiple usages and definitions of the term Virtualization.  Wikipedia has an entire page listing the varying types of Virtualization available.

From my perspective, the best way to describe and explain virtualization is by the typical end-product: the more efficient and manageable usage of IT resources.  There are lots of other advantages – hardware independence (aka separating runtime code from physical hardware), perceived increased resilience, snapshots, easier backups – but what normally makes the case is greater efficiency and utilisation of IT resources.  In other words, the dreaded Return on Investment (ROI).

Consider this: one of the most common forms of Virtualization is Platform Virtualization, where the operating system is separated from the hardware upon which it is running. Instead of adopting the traditional route of installing an operating system directly onto the computer’s hard disk, it is installed into a software container known as a Virtual Machine (VM).  This virtual machine is hosted by a piece of software called a Hypervisor.  The Hypervisor sits between the VM and the physical hardware of the host computer.  Instead of allowing the VM direct access to the host computer’s hardware, it provides a virtual hardware infrastructure upon which the Virtual Machine runs.  You are not limited to one Virtual Machine per Hypervisor.  In Platform Virtualization, you may have several Virtual Machines all running concurrently.

So why is this of any use?

I’ve had this question quite a few times, and this is the best explanation I have come up with so far for Platform Virtualization.  It is a bit vanilla in nature, but I feel that it is a good general explanation:

Organisation A has three identical servers, none of which ever utilises more than 30% of its total resources.  In essence, 70% of the capacity of each server is wasted.  With Virtualization, all three servers could be converted to three Virtual Machines and then hosted on one physical server.  Doing this will save the organisation both money and space: they will only be paying for the running costs of one physical server instead of three, and they will only require the space of one server.

I use Platform Virtualization extensively.  Typically I have 3 or 4 virtual machines running at any one time, two of which run permanently. Whilst the majority are for testing, those two are crucial to my day-to-day operations: one is my spam filter and the other is a monitoring server.

As both a developer and a sysadmin, Virtualization has made the process of testing and evaluation a whole lot easier.  If I look back to the heady days of 2000/2001, the company I was then working for maintained an extensive suite of computers of varying vintages running a multitude of operating systems (side note:  I’m still trying to work out why we were testing a multimedia CD on a Sun Sparc).  When you figure in the space required, power usage and time required to manage and maintain such a setup, the costs do start to add up.  For me, the arrival of virtualization has all but nullified this requirement.  Instead of having a stack of PCs lying around, I now have a stack of Virtual Machines.

What do I use?

As I mentioned above, I have evaluated a substantial number of solutions, including amongst others VMWare Server, Microsoft Virtual Server and Sun VirtualBox.  After going through all of these, and taking into consideration my internal requirements – I’m not running a data farm, remember – I have chosen to use Microsoft Hyper-V Server.

My choice of Hyper-V is not because I am some sort of evangelical Microsoft user.  Truth be told, whilst I tend to use MS products most of the time, my operating decisions are based on the sound engineering principle of using the right tool for the job in hand.  Consequently, two of my production Virtual Machines are running Ubuntu Linux, not Microsoft Windows.  I would be lying if I didn’t say that the cost – Hyper-V is free – was a factor, but at the end of the day I cannot justify the capital expenditure of VMWare’s equivalent offering for my own uses.

To date, I am very impressed with Hyper-V, even more so considering I am running the Release Candidate of Hyper-V R2.  It has been rock-solid in terms of reliability.  The only problems I have really experienced have been with Microsoft Hyper-V Manager, but these were more down to Windows’ security systems than anything else.  It’s not a perfect solution – there is no local GUI, so all management has to be done via a remote tool (Hyper-V Manager) or via the command line – but for the price and feature set, I’m not complaining.

As I expand my usage of Hyper-V, I will post further details of what happens especially with regards to Linux Virtual Machines and ongoing management.