Monday, August 20, 2012

IP and Port Scanning From Windows Command-line

Ever need an IP or port scanner but didn't have one installed or permission to install one?  Here's a quick trick I came up with using the little-used built-in features of the Windows command line:

for /L %A in (1,1,254) do ping -n 1 192.168.1.%A
for /L %A in (1,1,254) do for /L %B in (1,1,1024) do telnet 192.168.1.%A %B

Windows has a built-in for command, and when used with the /L switch it acts like a traditional counting for loop, as in C and other programming languages.

The first line loops through the values 1 to 254, incrementing by 1, and pings 192.168.1.%A, where %A is the value of the loop variable.

The second line uses nested loops to telnet to IP addresses in the same range as above and to port numbers from 1 to 1024 inclusive.  There is a hitch: if telnet connects, it just hangs there, but you can only work with what you've got.
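If a bash shell happens to be available instead (say, on a locked-down Linux box), the same sweep can be done with bash's /dev/tcp pseudo-device, which avoids the telnet hang entirely.  A minimal sketch; the host and port range in the commented invocation are examples:

```shell
# Usage: scan <host> <first_port> <last_port>
# Prints "<host>:<port> open" for each port that accepts a TCP connection.
scan() {
  local host=$1 p
  for ((p = $2; p <= $3; p++)); do
    # /dev/tcp/<host>/<port> is a bash pseudo-device; `timeout` keeps a
    # silently-dropped (filtered) port from stalling the whole loop.
    if timeout 1 bash -c "exec 3<>/dev/tcp/$host/$p" 2>/dev/null; then
      echo "$host:$p open"
    fi
  done
}

# scan 192.168.1.1 1 1024   # example invocation
```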

Here's a sample run:

IP Scanner

Port Scanner

Sunday, August 19, 2012

Finding And Deleting Locked Files With Process Explorer

We've all had the problem with trying to delete a file and then getting an error similar to the following:

Now in this example, Windows was kind enough to tell me the name of the program that is locking the file.  But this is not always the case.

In certain circumstances you aren't told the name of the program.  This is when the Sysinternals tool known as Process Explorer comes in handy.

Process Explorer is essentially a "super" Task Manager that gives you a more in-depth view of what is going on on your system.

There is a feature in Process Explorer called Handle search that allows you to search for open handles.  Handles are similar to file descriptors in Unix/Linux.

To find a locked file with Process Explorer do the following:

1) Open Process Explorer, hit Ctrl+F and type in the name or partial name of the file you're looking to unlock/close:
2) Double-click the result.  You will then be taken to the following window.  Right-click the handle and select "Close Handle".  Take heed of the warning and acknowledge it if you choose to.  You should then be able to delete the file.
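The same lookup and close can also be done from a command prompt with Process Explorer's Sysinternals sibling, handle.exe.  A sketch; the file name is an example, and the hex handle value and PID come from the tool's own output:

```shell
# List processes holding a handle whose name matches the file:
handle.exe lockedfile.txt

# Close a specific handle (values taken from the output above);
# -y acknowledges the same warning that Process Explorer shows.
handle.exe -c 1A4 -p 2312 -y
```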

Firewalls And nmap

nmap is one of the most popular security tools.  Chances are you already know what nmap is and what it does so I'll spare the introduction.

As part of my IT security self-education, I set out to see how nmap interacts with firewalls.  I chose iptables/netfilter on Linux since it's free and there are a lot of GUI tools that let you set it up and configure it quickly.

nmap has many options for performing port scans, one of them being IP spoofing.  IP spoofing is commonly used in denial-of-service attacks to modify the source IP address of packets and hide where the traffic is coming from.  For port scanning, IP spoofing usually isn't very useful, because the replies are sent to the spoofed address rather than back to you.  However, if you're on the local network, the ARP entry for the spoofed source IP will have your MAC address associated with it, so you will still get the return traffic.
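As a concrete sketch of a spoofed scan on a local network (addresses and interface are examples): -S sets the spoofed source address, -e names the interface to send on, and -Pn skips the host-discovery ping, whose replies would otherwise also go to the spoofed address.

```shell
nmap -e eth0 -S 192.168.1.50 -Pn 192.168.1.10
```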

In my test setup I used the following:
  • Backtrack 5 R3 VM
  • Ubuntu 12.04 VM with iptables and Firestarter GUI
  • Debian 6 VM
  • Wireshark
  • nmap 6.01
I began with a version scan of port 22 on the Ubuntu VM, with iptables disabled.

nmap connected and identified the service.  Below is a Wireshark capture of the session from the Ubuntu VM.  You can see the establishment of the connection and the tear-down.

Below is a Wireshark capture of a connection made with the SSH client itself.  You can see the key exchange and connection establishment.

I then ran the same scan with both the firewall and SSHd disabled.  Below is a Wireshark capture of the session.
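For reference, the baseline version scan described above looks roughly like this (the target IP is an example):

```shell
nmap -sV -p 22 192.168.1.10
```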

Note that when both the firewall and SSHd are disabled, a TCP RST is sent back, indicating that the port is closed.  Repeating the same scan with the firewall enabled produced the following:


nmap reports that the port is filtered, meaning the firewall is silently dropping the packets.  The Wireshark capture shows that the Ubuntu VM is not returning any packets to the scanning host.

I then configured a rule on the firewall to only allow SSH traffic from a specific IP.
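Firestarter builds the rules through its GUI, but the equivalent iptables rules would look roughly like this (the addresses are examples; the unconditional DROP is what makes nmap report the port as filtered):

```shell
iptables -A INPUT -p tcp --dport 22 -s 192.168.1.50 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```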


After doing that I proceeded to use nmap to spoof the source IP address of the allowed host with the following results:
nmap was able to determine that the host was up, and as the Wireshark captures show, the spoofed traffic reaches the Ubuntu VM and the replies are received by the scanning VM.  The replies come back because the ARP entry for the spoofed IP is associated with the scanning VM's MAC address, as shown in the screenshots below.

The real host associated with the IP

The ARP cache of the Ubuntu VM

The real MAC address of the scanning VM
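The ARP state shown in the screenshots can also be checked from a shell on either VM; both commands below simply display the ARP cache:

```shell
arp -n            # classic net-tools view of the ARP cache
ip neigh show     # iproute2 equivalent
```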

nmap is a very powerful tool that usually isn't exploited to its full potential.  If you're going to use nmap, at minimum read the man page.  If you want to get the most out of it, check out the following resources:

Tuesday, August 7, 2012

ESXi Rebuild

While reviewing some material for the Microsoft Exchange 2007 70-236 exam, I rebooted one of my Exchange VMs and was greeted with a blue screen informing me that the registry was corrupted. 

After trying to boot into safe mode and then trying to boot with the last known good configuration option, I tried a system repair using the Windows Server 2008 disc.  I was able to repair one of the hives, however after rebooting I was hit with another blue screen stating that another hive was corrupted. 

After doing some research online, I found via Microsoft's Knowledge Base that there might be an issue with the hard drive.  I then remembered that I'd seen some file system errors on some of my Linux VMs, and after logging into a few of them, I noted that almost all of them had file system errors and their file systems were mounted read-only.  I booted the ESXi server off of a Linux USB key and found that one of the hard drives had 50 bad sectors.  I tried running fsck from the ESXi command line, but it appears to be available only when the host itself cannot boot, not as a general command-line option.

I figured that since the drive was bad, I might as well buy a new drive, take an image of the existing drive, and then write the image to the new one.  I opted to use CloneZilla to take the image but ran into an issue: CloneZilla stores the image files locally on the bootable USB drive before copying them to their final destination (an SMB share I'd set up on my desktop).

I figured that if I installed the new drive, did a clean install of ESXi 5 on it, and then added the existing datastore from the old drive, I could just copy the VMs over and get rid of the old datastore.  After completing the install I went into the datastore browser, selected all of the VMs, and moved them to the new datastore.  This was a mistake.  It caused the host to almost completely lock up, to the point where I had to restart the management agents to regain control.  After doing that I was left with partially copied VMs.  When I tried to move the vmdk files over to the new datastore, ESXi told me that the files already existed there.  Logging in via SSH to verify, I found no trace of the files on the second datastore, but it appeared that ESXi had somehow created a symbolic link between the source and destination during the move.  I ended up renaming the offending vmdk files and then pointing the VMs' disks at the renamed files.  I was able to move almost all of the VMs successfully.  One wouldn't boot because I had selected 'I moved it' instead of 'I copied it' when starting the VM, and four others appeared to have been corrupted by the bad sectors.
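In hindsight, a safer way to migrate the disks one at a time from the ESXi shell is vmkfstools, which clones a vmdk without going through the datastore browser.  A sketch; the datastore and VM names are examples:

```shell
# -i does a full clone of the source disk to the destination path;
# the VM is then re-registered pointing at the new vmdk.
vmkfstools -i /vmfs/volumes/old_datastore/vm1/vm1.vmdk \
              /vmfs/volumes/new_datastore/vm1/vm1.vmdk
```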

After migrating the VMs and verifying they all worked, I applied the latest ESXi patch to the host.  Prior to applying the patch I was able to unmount and remount the old datastore to verify that none of the VMs depended on any files on it.  After the patch, however, I was unable to unmount or delete the datastore, because the patching process had created a diagnostic partition on it.  When I tried to delete the partition from the ESXi command line, I couldn't because it was in use, which leads me to believe that VMFS doesn't allow deletion of any files or resources that are in use.  I ended up shutting down the machine and physically disconnecting the drive.  After booting up, the datastore was gone (of course), and I then added the old drive to the new datastore, regaining my 1.5TB of space.

Key takeaways from this were:
1) Don't buy cheap hardware.
2) VMFS doesn't allow for deletion of files that are in use.
3) It is best to install the hypervisor on a separate physical disk to make disaster recovery easier.

This was quite the ordeal, but also a learning experience.