Monday, October 31, 2011

How to Enable Ping Response on Windows 7

I've decided to chronicle some of the common issues that users, developers, architects, and system administrators face. This article is the first in a series addressing those complaints.

One common complaint from home users is the inability to ping a new Windows 7 or Windows Server 2008 R2 system. This occurs on most default installations, regardless of which network location is selected. I set up a virtualized Windows 7 test system on my Hyper-V server and connected it to a network. To relax the initial security of the system, I even specified a "Home" network (out of Home, Work, Public, or Domain). The first thing to note is that the resulting configuration is fairly locked down: even though I specified "Home" during setup, Windows' inability to identify the network through network discovery leads it to fall back to the "Public" profile by default. Pinging the system fails with "Request timed out."



This is a feature of the latest version of Windows Firewall. For whatever reason, Microsoft decided that ping (ICMP Echo) responses should be grouped with File and Printer Sharing in the default rule set. The rule is disabled by default (as it should be), and no rule exists at all for Unidentified/Public networks:



To create a rule allowing ping on a "Public" network, use the "New Rule" wizard in "Windows Firewall with Advanced Security," which is accessed by navigating to Start -> Control Panel -> Administrative Tools -> Windows Firewall with Advanced Security.



Rule Type: Select the type of firewall rule to create. Select Custom Rule:



Specify All Programs on the next step.

For Protocols and Ports, select ICMPv4 and customize the ICMP settings to allow Echo Request.





Select the appropriate scope; for most users, Any/Any will work.



Allow the connection and apply the rule to all network profiles. After completing the wizard, the system should respond to ping:



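For readers who prefer the command line, the same rule can be created with netsh from an elevated prompt. This is a minimal sketch: the rule name is arbitrary, and profile=any mirrors the "all network profiles" choice in the wizard.

  netsh advfirewall firewall add rule name="Allow ICMPv4 Echo Request" protocol=icmpv4:8,any dir=in action=allow profile=any

Here icmpv4:8,any restricts the rule to ICMP type 8 (echo request) with any code, which is exactly what the Customize dialog in the wizard configures.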
A couple of things should be said about security when enabling ping responses. Enabling ICMP responses makes your system more visible on any network it joins. For home users who do not connect to public networks, the additional risk is minimal. On public networks, this rule may allow malicious users to find your system more easily, but it should not significantly increase the risk of compromise as long as other security best practices are followed.

Have an idea for something that you'd like to see explored? Leave a comment or send an e-mail to razorbackx_at_gmail<dot>com

Monday, October 24, 2011

Windows Phone Wifi Annoyance Solved in 7.5

"Your phone couldn't reach the network"

I received this error when I tried connecting my new HTC Trophy to my home network, which is secured with WPA2. I normally wouldn't care, but Verizon has stopped offering unlimited data plans, so I decided to supplement the 2GB monthly limit with Wifi at home and at work. The error persisted even after the first update I applied, but yesterday, after installing the Windows Phone 7.5 (Mango) update, the phone connected and I can now access secured networks without any issues. So if you are having similar difficulties, apply the latest updates...

Saturday, October 8, 2011

No-SAN Failover Clustering With a Single Hyper-V Host

Background

"Look Ma, No SAN..."

Occasionally the need arises to demonstrate the functionality of a cluster built with Microsoft Cluster Service (MSCS) or Failover Clustering (the new name in 2008/2008 R2) in an environment without a SAN. This might come from needing to demonstrate basic functionality to a client (using a reasonably powerful laptop and a projector) or from running a specialized test case against a particular set of conditions or a particular cluster/machine state. It is also useful for demonstrating proof-of-concept failover/recovery scenarios. Although this approach could be used to create failover nodes across multiple Hyper-V hosts, my intention is to show how a clustered solution can be built on a single Hyper-V host.

Note that this does not create a highly available or fault-tolerant solution, because all of the VMs run on a single Hyper-V host on a single piece of hardware. The intention is to use the environment for a specific test (possibly involving a patch or new software release) or in a consulting/sales setting to demonstrate the functionality of a product or provide a proof of concept prior to implementation. Additionally, this does not create a clustered VM that is compatible with other Hyper-V 2008 R2 features like Live Migration.

Although I build this solution on Hyper-V, it can easily be implemented on any other virtualization platform that supports guest networking (VMware, Citrix XenServer, etc.).

One of the motivations behind this post is that a virtual hard disk in Hyper-V can only be attached to a single guest. Trying to attach the same disk to multiple guests gives the following error: "Error Applying Hard Drive Changes: Failed to add device 'Microsoft Virtual Hard Disk', Attachment <Path> failed to open because of this error: 'The process cannot access the file because it is being used by another process.'"



Since an MSCS/Failover cluster needs shared storage, a centralized network-based storage solution is required. Most SANs are expensive (and don't fit in a typical laptop bag). Solutions exist on other platforms (such as OpenFiler), but on the Microsoft platform, enter the Microsoft iSCSI Software Target. This tool, currently compatible only with Windows Server 2008 R2 and Windows Server 2008 R2 SP1, allows the creation of virtual disks that act as iSCSI targets for other initiators (VMs and non-VMs).

Designing the Solution



To demonstrate the solution, I designed and implemented a cluster running Microsoft SQL Server 2008 R2 on two Windows Server 2008 R2 VMs. I also built a DC that doubles as the storage server for the iSCSI targets and provides the domain environment. This fits my purposes for this post: I typically build a new environment for each group of posts that I write, and it lets me build smaller, special-purpose demonstration environments with less hardware. It also means I don't need to install any software or add any unnecessary roles/features to the Hyper-V host.

From an architectural perspective, placing the shared storage on the Hyper-V host rather than in a guest would give the best performance, because accessing the storage through the host carries less overhead than accessing it through a VM (and then the host underneath it). Beyond that, memory and CPU allocations should give the VMs enough resources to run the hosted application(s) with acceptable performance (whatever that means in your specific case).

Implementing the Solution

Implementation of the clustered SQL Instance involves three main steps:
  • Initial VM setup and iSCSI configuration
  • Establishing the failover cluster with the two nodes
  • SQL cluster preparation/configuration on the first node, then joining the second node to the established cluster
Note: this post highlights the key points of the setup. Many specific configuration steps are skipped for brevity (for more information on configuring SQL clusters, see this link).

To begin the implementation, install the Microsoft iSCSI Software Target on the storage server.



Then create the virtual disks that the iSCSI initiators will access:



To set up access to the volumes, I usually use iSCSI Qualified Names (IQNs). The easiest way to get these to appear is to configure the iSCSI Initiator on the initiating hosts and point it at the IP address of the storage server. Verify on the Discovery tab that the storage server is listed as a target portal for discovering possible targets.



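The initiator side can also be configured from a PowerShell prompt. A minimal sketch follows; 192.168.1.10 is a placeholder for the storage server's IP address in your environment.

  # Make sure the iSCSI Initiator service is running and starts automatically
  Set-Service msiscsi -StartupType Automatic
  Start-Service msiscsi

  # Add the storage server as a discovery portal and list the targets it exposes
  iscsicli QAddTargetPortal 192.168.1.10
  iscsicli ListTargets

Running ListTargets is also a convenient way to capture the node's IQN-facing view before configuring the targets on the storage server.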
Now, back on the storage server, configure the targets and associate them with the volumes. Note that when it comes time to add the IQNs for the target, use the "Advanced" button to add multiple IQNs.



After the targets for the Quorum drive and the SQL volume have been created, add the targets to the volume configuration.



Back on the servers, connect to the targets and configure devices.




At this point, set up the filesystems on the two volumes; otherwise the cluster will be created as a node majority cluster, and with only two nodes, losing a single node will take the whole cluster down. If you forget to do this, you can use the "Configure Cluster Quorum Wizard" later and configure the shared disk resources manually.
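For reference, the previous two steps (logging in to a target from a node and formatting the volume) can also be scripted. This is a rough sketch only: the IQN, disk number, volume label, drive letter, and temp file path are placeholders that will differ in your environment, and the clean command wipes the selected disk, which is only appropriate for a brand-new volume.

  # Log in to a target discovered earlier (substitute the IQN reported by ListTargets)
  iscsicli QLoginTarget iqn.1991-05.com.microsoft:dc01-sqlquorum-target

  # Bring the new disk online and format it (run on one node only); repeat for the
  # second volume, adjusting the disk number, label, and drive letter.
  @"
  select disk 1
  attributes disk clear readonly
  online disk
  clean
  convert mbr
  create partition primary
  format fs=ntfs quick label=Quorum
  assign letter=Q
  "@ | Out-File -Encoding ASCII "$env:TEMP\quorum-disk.txt"
  diskpart /s "$env:TEMP\quorum-disk.txt"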

Now the cluster setup and SQL configuration can begin. Ensure the Failover Clustering feature is added through Server Manager, and add the two virtual SQL nodes to the cluster.



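The feature installation and cluster creation can also be driven from PowerShell on Server 2008 R2. In this sketch, the node names (SQLNODE1, SQLNODE2), cluster name (SQLCLU), cluster IP address, and disk resource name are placeholders for your environment.

  # Install the Failover Clustering feature (run on each node)
  Import-Module ServerManager
  Add-WindowsFeature Failover-Clustering

  # Validate and create the cluster (run once, from either node)
  Import-Module FailoverClusters
  Test-Cluster -Node SQLNODE1, SQLNODE2
  New-Cluster -Name SQLCLU -Node SQLNODE1, SQLNODE2 -StaticAddress 192.168.1.50

  # If the shared disks were added after the fact, this is the scripted equivalent
  # of the Configure Cluster Quorum Wizard mentioned earlier
  Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"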
Once the cluster is created and validated, the next step is to prepare/install the SQL cluster.



After getting through the preparation and completion wizards, a single-node cluster is set up on the first node. To add the second node, complete the "Add node to a SQL Server failover cluster" wizard on the second node.
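SQL Server setup also exposes documented command-line actions for these steps (PrepareFailoverCluster, CompleteFailoverCluster, and AddNode). As a rough sketch of the final step only, the "Add node" wizard run on the second node corresponds roughly to the following, where the instance name, domain accounts, and passwords are placeholders; run it from the installation media directory.

  # Scripted equivalent of the "Add node" wizard (run on the node being added)
  .\setup.exe /q /ACTION=AddNode /INSTANCENAME="MSSQLSERVER" `
      /SQLSVCACCOUNT="CONTOSO\sqlsvc" /SQLSVCPASSWORD="PlaceholderP@ss" `
      /AGTSVCACCOUNT="CONTOSO\sqlagent" /AGTSVCPASSWORD="PlaceholderP@ss" `
      /INDICATEPROGRESS /IACCEPTSQLSERVERLICENSETERMS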


Verification of the Solution

Verification of the solution is fairly straightforward. There are three main things to test immediately with the clustered solution:
  • Connection to the SQL Instance virtual hostname/IP via SQL Server Management Studio
  • Failover of SQL Cluster resources on node failure (simulated by hard power off of one of the VMs)
  • Automatic re-establishment of connection to the cluster after the connection is dropped due to failover
Connect via SSMS to the clustered instance:



Run a query and fail the primary node...



Watch the failover to the secondary node.




Next, verify via SSMS that a query can be completed after failover.
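If you prefer a command-line check, either of the following shows which physical node is currently serving the instance before and after the failover; SQLVS is a placeholder for the instance's virtual network name.

  # Ask the clustered instance which node is hosting it right now
  sqlcmd -S SQLVS -E -Q "SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentNode"

  # Or, from either cluster node, check which node owns each cluster group
  Import-Module FailoverClusters
  Get-ClusterGroup | Format-Table Name, OwnerNode, State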



And that concludes the demonstration of SQL clustering: a virtual two-node SQL cluster configured on a single Hyper-V host. Since the back-end shared storage could not be supplied by attaching the same VHD to both guests, the Microsoft iSCSI Software Target was used to create shared storage for the cluster nodes.