Tiny Deathstars of Foulness

21-14.  Florida Gators top Bulldogs, sadly.  6-3 is still basically bowl bound.  Kentucky coming up next.

October 29th, 2006

Posted In: Football


Outlook Duplicate Items Remover (ODIR) is a great tool for removing duplicate items from Outlook.

October 28th, 2006

Posted In: Microsoft Exchange Server


I originally posted this at

The amount of data used by small businesses is on target to rise 30 to 35 percent in 2006. Sarbanes-Oxley, HIPAA and SEC Rule 17a-4 have introduced new regulations on how long data must be kept and in what format. Not only must data be kept, it must be backed up and secured. These factors are driving the cost of data storage for the small business up exponentially. Corporations valued at more than 75 million dollars are generating 1.6 billion gigabytes of data per year. Small and medium sized companies can reap the benefits of developments made for these larger corporations, and tiered classification of data is one of them.

Information Lifecycle Management (ILM) is a process for maximizing information availability and data protection while minimizing cost. It is a strategy for aligning your IT infrastructure with the needs of your business based on the value of data. Administrators must analyze the trade-offs between cost and availability of data in tiers by differentiating production or transactional data from reference or fixed-content data. ILM includes the policies, practices, services and tools used to align business practices with the most appropriate and cost-effective data structures. Once data has been classified into tiers, storage methods can be chosen that are in line with the business needs of each organization. The policies that govern these practices need to be clearly documented in order to keep everyone working toward the same goals.

Storage Classification

Online storage is highly available, with fast and redundant drives. The XRAID and Xsan are considered online storage, which is best used for production data as it is dynamic in nature. This can include current projects and financial data. This data must be backed up often and be rapidly restored in the event of a loss.
It is not uncommon to use one XRAID to back up another XRAID for immediate restoration of files, and a tape library to maintain offsite backups of the XRAID.

Offline storage is used for data retained for long periods of time and rarely accessed. Data often found on offline media includes old projects and archived email. Media used for offline storage is often the same as media used for backup, such as tape drives and optical media. When referring to offline storage we refer to archives, not backups: archives are typically static, whereas backups change dynamically with each backup run. Offline storage still needs to be redundant or backed up, but the schedules for backup are often more lax than those for other classifications of storage. In a small or medium sized company, offline media is often backed up, or duplicated, to the same type of media that it is housed on. There may be two copies of a tape (one onsite and one offsite) or two copies of DVDs that the data has been burned onto, with each copy stored in a different physical location.

Near-line storage bridges the gap between online and offline storage by providing faster data access than archival storage at a lower cost than primary storage. FireWire drives are often considered near-line storage because they are slower and usually not redundant. Near-line can hold recent projects, old financial data, office forms that are updated rarely, and backups of online storage kept readily available for rapid recovery. Backup of near-line storage will probably be to tape.

Data Classification

Mission Critical data is typically stored in online storage. This data is the day-to-day production data that drives information-based businesses. It includes the jobs being worked on by designers, the video being edited for commercials and movies, accounting data, legal data (for law firms) and current items within an organization's groupware system.
For the small business, Vital and Sensitive data are often one and the same. Vital data is data that is used in normal business practices but can be unavailable for minutes or longer. Sensitive data is often accounting data that a company can live without for a short period of time but will need restored quickly in the event of a loss. Small businesses will typically keep Vital and Sensitive data on the same type of media but may have different backup policies for them; for example, a company may choose to encrypt sensitive data and not vital data.

Non-Critical data includes items such as digital records and the personal data files of network users. Non-Critical data could also include a duplicate of Mission Critical data from online storage. Non-Critical data often resides on near-line or offline media (as is the case with email archives). It primarily refers to data kept as part of a company's risk management strategy or for regulatory compliance, such as old emails and financial records.

Classification Methods

The chronological method for classifying data is often one of the easiest and most logical. For example, a design firm may keep its mission critical current jobs on an XRAID, vital jobs less than three months old on a FireWire drive attached to a server, and non-critical jobs older than three months on backup tapes or offline FireWire drives. It would not be possible to implement this classification without having the data organized into jobs first. Another way to look at this method is that data over 180 days old automatically gets archived.

The characteristic method of data organization means that data with certain characteristics can be archived. This can be applied to accounting and legal firms. Whether a client is active or not is simply a characteristic; whether a type of clothing is in style or not is another possible characteristic.
Provided that data is arranged or labeled by characteristic, it is possible to archive using a certain characteristic as a variable or as metadata. Many small and medium sized companies are not using metadata for files yet, so a good substitute can be using the file name to denote attributes of a file's data.

The hierarchical method of data organization means that files or folders within certain areas of the file system can be archived. For example, if a company decides to close down its Music Supervision department, then the data stored in the Music Supervision share point on the server could be archived.

Service Level Agreements

The final piece of the ILM puzzle is building a Service Level Agreement for data management within a company. This is where the people that use each type of data within an organization sit down with IT and define how readily available that data needs to be and how often it needs to be backed up. In a small business it is often the owner of the company who makes this decision. In many ways this makes coming to terms with a Service Level Agreement easier than in a larger organization: the owner of a small business is more likely to have a picture of what the data can cost the company. When given the cost difference between online and near-line storage, small business owners are more likely to make concessions than managers of larger organizations, who do not have as much of an ownership mentality toward the company. Building a good Service Level Agreement means answering questions about the data, asked per classification. Some of the most important questions are:

How much data is there?
How readily available does the data need to be?
How much does this cost the company, including backups?
Given the type of storage used to house this data, how much is it costing the company?
If nearly half the data can be moved to near-line storage, what will the savings be to the company?
In the event of a loss, how far back in time is the company willing to go for retrieval?
Is the data required to be in an inalterable format for regulatory purposes?
How fast must data be restored in the event of a loss?
How fast must data be restored in the event of a catastrophe?
Will client systems be backed up? If so, what on each client system will be backed up?

Information Lifecycle Management

Most companies will use a combination of methods to determine their data classification. Each classification should be mapped to a type of storage by building an SLA. Once this is done, software programs such as BRU or Retrospect can be configured for automated archival and backups. The backup/archival software chosen will be the component that implements the SLA, so it should fulfill the requirements of the ILM policies put into place. The schedules for archival and backups should be set in accordance with the business's needs. Some companies may choose to keep the same data in online storage longer than other companies in the same business because they have invested more in online storage or because they reference the data often for other projects. The business logic of the organization will drive the schedule, using the SLA as a roadmap. Setting schedules means having documentation for what lives where and for how long. Information Lifecycle Management means bringing the actual data locations in line with where the data needs to be. Once this has been done, the cost to house and back up data becomes more quantifiable and cost efficient. The SLA is meant to be a guideline and should be revisited at roadblocks and at intervals along the way. Checks and balances should be put into place to ensure that the actual data management situation accurately reflects the SLA. ILM and regulatory compliance are more about people and business process than about required technology changes. The lifecycle of data is important to understand.
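One of the SLA questions above asks what moving nearly half the data to near-line storage would save. A quick back-of-the-envelope sketch, with per-gigabyte prices invented purely for illustration:

```shell
# Hypothetical figures: 2000 GB total, online storage at $5/GB,
# near-line at $1/GB, with half the data eligible to move.
total_gb=2000
online_cost=5
nearline_cost=1
moved=$((total_gb / 2))
savings=$(( moved * (online_cost - nearline_cost) ))
echo "Moving ${moved} GB to near-line saves \$${savings} in storage cost"
```

With these made-up numbers the move saves $4,000; the point is that the SLA conversation can be grounded in a two-line calculation rather than a guess.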
As storage requirements spiral out of control, administrators of small and medium sized organizations can look to the methods of Enterprise networking for handling storage requirements with scalability and flexibility.
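As a sketch of the chronological method described above, a scheduled script could sweep job folders untouched for 180 days from online to near-line storage. The function name and paths here are hypothetical, not from the article:

```shell
#!/bin/sh
# archive_old_jobs: move top-level job folders not modified in N days
# from an online volume to a near-line archive volume.
archive_old_jobs() {
  src="$1"; archive="$2"; days="$3"
  mkdir -p "$archive"
  # -mtime +N matches folders whose modification time is more than N days old
  find "$src" -mindepth 1 -maxdepth 1 -type d -mtime +"$days" |
  while read -r job; do
    mv "$job" "$archive/"
  done
}

# Hypothetical usage on an Xsan volume:
# archive_old_jobs /Volumes/XsanVolume/Jobs /Volumes/Nearline/Archive 180
```

Run from cron, something like this only works if jobs really are organized into one folder each, which is exactly the prerequisite the article calls out.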

October 26th, 2006

Posted In: Xsan


Do you want to run Software Update Services through a proxy server? In the /System/Library/LaunchDaemons/ file you can add the following (if your proxy were ):

<key>EnvironmentVariables</key>
<dict>
    <key>http_proxy</key>
    <string></string>
</dict>

October 25th, 2006

Posted In: Mac OS X Server, Mac Security

27-24.  Doesn’t bode well for a trip to Jacksonville.

October 22nd, 2006

Posted In: Football


Is it me, or are all the people here just totally loaded?  I guess sometimes it’s all about who ya’ meet…  But I think the weather here is nicer than it is in LA…  BTW, Archie, sorry you had to sleep in the garage…

October 21st, 2006

Posted In: On the Road


I originally posted this at

By default the global permissions for new files written to an Xsan volume are 644 (rw-r--r--). This can result in a permissions problem where one user can read another user’s posted items but not make changes to them. This can be resolved by changing the default umask value. It’s a simple command line:

sudo defaults write -g NSUmask 2

NSUmask takes a decimal value; the 2 here is the decimal equivalent of the octal umask 002, which masks only the write bit for "other". The result of running this command is that files posted to the shared volume will have 664 permissions (rw-rw-r--), allowing other users in the group to modify the files. Note that this command must be run on all machines accessing the Xsan shared volume (it cannot be applied globally from the Xsan controller).
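Since NSUmask wants the decimal form of an octal umask, it is easy to get the conversion wrong. A quick sanity check at the shell (illustrative only, not part of the original tip):

```shell
# POSIX printf treats a leading-zero argument as an octal constant,
# so it can convert an octal umask to the decimal NSUmask value.
printf '%d\n' 0002   # umask 002 -> decimal 2  (new files get 664)
printf '%d\n' 0027   # umask 027 -> decimal 23 (new files would get 640)
```

This also shows why a decimal value of 23 corresponds to umask 027, not to group-writable files.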

October 20th, 2006

Posted In: Xsan


I originally posted this at

Introduction

Xsan requires a dedicated Ethernet network in Apple’s supported architecture. For systems that obtain directory information or need to be wired into an organization’s corporate network, this can cause issues: namely, Xsan will attempt to use the corporate network for connectivity with clients. We see this in many configurations, and it can cause dropped packets, unmountable volumes and other intermittent issues. One way to fix this for metadata controllers is to choose the network adapter that you would like to use on the metadata network in Xsan Admin. This can be done as follows:

Open Xsan Admin
Click on the SAN listed under SAN Components
Click on Setup
Click on Computers
Click on the metadata controller
Choose the network controller you would like to connect with

This doesn’t always work, and selecting a network controller to use is not an available option with Xsan clients and older versions of the Xsan software. A way to get around this issue for these systems is to block metadata traffic on the production network interface. Here is an example of how to do this using ipfw. This example assumes that you are using Tiger Server for your metadata controller. Once we step through this using Server Admin, we will explain the same thing using the configuration files, which should help users of older versions of Mac OS X Server or of Mac OS X client. Through this entire article we are going to assume that the IP address of our metadata controller’s production interface is and that the IP address of its metadata interface is . If metadata goes over the 192.168.50.x network then it will likely cause issues, so our goal throughout is going to be blocking metadata traffic on . There are many ways to do this; we are going to focus on blocking outgoing traffic for Xsan on the production IP. Another way to accomplish this might be to block all metadata traffic for the network range of 192.168.50.x.
There are many different ways to do this (including using managed switches) but this way has been working for us.

DISCLAIMER: Be very careful playing with the firewall on a headless server. The last thing you want to do is block yourself from being able to ARD or SSH into a system you are trying to administer. Try to do this while you have the opportunity for a little down-time, as you might need time to troubleshoot. Also, be sure to back up the service settings before making any changes. To do this, drag the small icon in the lower right-hand corner of the Firewall service screen to the desktop. When you see the + icon, let go and it will save the service settings. If you need to restore them later you can drag the file back onto the Server Admin screen.

Using Firewall in Server Admin

First we’re going to enable Firewall (ipfw):

Open Server Admin from /Applications/Server
Authenticate to the server you are configuring
Click on Firewall under the Computers and Servers list
Click on Settings
Click on Services
Click on the Start Service button in the toolbar of Server Admin

Now we’re going to define the ports to block (see Figure 1):

Click on the + sign to add a service
Enter Xsan under the Name field
Enter “536, 537” (without the quotes) under the Port field
Leave the Protocol set to TCP
Click on OK

Figure 1 – Defining Ports

Now we’re going to define the addresses to block this port for (see Figure 2):

Click on Address Groups
Click on the + sign to add an address group
Name the address group Production IP
Click on the + sign beside Addresses in group
Remove Any if it is present
Type the IP address of your internal IP for the production network
Click OK

Figure 2 – Creating the Address Group

Now we’re going to create the rule for blocking the Xsan ports (see Figure 3):

Click on the Advanced tab
Uncheck all of the rules that deny if you want to first test your base config (you can always go back and add rules once you’ve become more familiar with the Firewall service)
Click on the + button
In the Action drop-down choose the Deny option
In the Protocol drop-down choose the TCP option
In the Service drop-down choose the newly created Xsan service
In the Address drop-down choose the newly created Production IP address
In the Destination drop-down select Any
The Ports should read 536, 537 as this is what we defined earlier
In the Interface drop-down choose Out
Click OK

Figure 3 – Configuring the Rule

Using ipfw from the Command Line

Personally I find it much easier to do most of this using the command line, and for systems not running Mac OS X Server you will need to. To do this you use the ipfw.conf file, which defines what types of traffic are allowed. The file is located at /private/etc/ipfilter/ipfw.conf. The default ipfw.conf looks something like this:

# ipfw.conf.default - Installed by Apple, never modified by Server Admin app
#
# ipfw.conf - The servermgrd process (the back end of Server Admin app)
# creates this from ipfw.conf.default if it's absent, but does not modify it.
#
# Administrators can place custom ipfw rules in ipfw.conf.
#
# Whenever a change is made to the ipfw rules by the Server Admin application and saved:
#   1. All ipfw rules are flushed
#   2. The rules defined by the Server Admin app (stored as plists) are exported to
#      /etc/ipfilter/ and loaded into the firewall via ipfw.
#   3. The rules in /etc/ipfilter/ipfw.conf are loaded into the firewall via ipfw.
# Note that the rules loaded into the firewall are not applied unless the firewall is enabled.
#
# The rules resulting from the Server Admin app's IPFirewall and NAT panels are numbered:
#   10    - from the NAT Service - this is the NAT divert rule, present only
#           when the NAT service is started via the Server Admin app.
#   1000  - from the "Advanced" panel - the modifiable rules, ordered by their
#           relative position in the drag-sortable rule list
#   12300 - from the "General" panel - "allow" rules that punch specific holes
#           in the firewall for specific services
#   63200 - from the "Advanced" panel - the non-modifiable rules at the bottom
#           of the panel's rule list
#
# Refer to the man page for ipfw(8) for more information.
#
# The following rules are already added by default:
#
#add 01000 allow all from any to any via en0
#add 01010 deny all from any to
#add 01020 deny ip from to any in
#add 01030 deny tcp from any to in
#add 12300 ("allow" rules from the "General" panel)
#...
#add 65534 deny ip from any to any

First, we’re going to allow all traffic for both controllers. For this example, doing this will make sure that we don’t have any problems connecting to Xsan Admin (port 311) or ARD. To do this, take out the # for the line that reads:

#add 01000 allow all from any to any via en0

Removing the # from the beginning of a line uncomments the line, enabling the rule. I like to go through and put a commented line above rules that are complicated, so that other techs at my company can figure out what they do if they need to troubleshoot something that I’ve done; to do this you just begin the line with a #. Also, the en0 here might be en1 or lo1 for some metadata controllers; you can use Network Utility to determine which adapter is using which Ethernet port name. Next you will create another rule that denies outgoing traffic for ports 536 and 537 (Sun Grid Engine Qmaster) over the Ethernet adapter being used for the production network. This adapter can easily be identified using the Network System Preference pane.
This rule looks something like this, assuming that is the IP address the server is using for its production network interface:

add 65534 deny tcp from to any dst-port 536,537 out

Reading from left to right, we are telling ipfw to add a rule with a unique ID that denies TCP traffic from that IP address on ports 536 or 537 for outgoing traffic. The unique ID number also acts as the priority in which rules are run. A full sample file could be as short as:

#The following line enables network traffic on the Production Network
add 01000 allow all from any to any via en0
#The following line enables network traffic on the Metadata Network
add 01010 allow all from any to any via en1
#The following line disables Xsan Traffic for the Production Network
add 65534 deny tcp from to any dst-port 536,537 out

Finally, once you are sure that your configuration is good, use Server Admin to enable the Firewall service as described at the beginning of this article. Remember not to block port 311 or you will not be able to use Xsan Admin to administer the client.

Just in Case You Use Linux Clients

For Linux clients of Xsan, iptables is a great way to accomplish the same task. The two commands to create the deny rules, assuming that eth0 is your production network interface, would be something like this:

iptables -A OUTPUT -o eth0 -p tcp --dport 536 -j DROP
iptables -A OUTPUT -o eth0 -p tcp --dport 537 -j DROP

Note that outgoing traffic is filtered in the OUTPUT chain, which is also the chain that accepts the -o (outbound interface) flag. If iptables is not already started you can type:

service iptables start

The command to view the active rules for iptables is:

iptables -L

and chkconfig --list iptables will show whether the service is configured to start at boot.

Saving Windows for Last

Windows XP and Server 2003 have equally robust firewall features. To limit traffic over ports you would use Windows Firewall on Windows XP and Routing and Remote Access on Windows Server 2003. While Windows Server 2003 is a little beyond the scope of this document, I can help you get started.
For Windows XP:

Click on Start
Click on Connect To
Click on Show All Network Connections
Click on Change Windows Firewall Settings (see Figure 4)

Figure 4 – Network Connections

Once you have the Windows Firewall control panel open, click On and make sure that the Don’t allow exceptions check-box is unchecked (see Figure 5). Then click on Advanced and use the check-boxes under Network Connection Settings to disable the Firewall for the Metadata Controller (see Figure 6).

Figure 5 – Enabling Windows Firewall
Figure 6 – Windows Firewall Advanced Settings
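The article mentions putting a commented line above complicated rules so other techs can follow them. As a sketch, a commented ipfw.conf fragment in that style might look like this; the 192.168.50.2 address and rule numbers are invented for illustration, since the article's own addresses are elided:

```
# Allow all traffic on the production interface so ARD, SSH and Xsan Admin (311) keep working
add 01000 allow all from any to any via en0
# Allow all traffic on the metadata interface
add 01010 allow all from any to any via en1
# Deny outgoing Xsan metadata traffic (ports 536 and 537) sourced from the production IP
add 65000 deny tcp from 192.168.50.2 to any dst-port 536,537 out
```

The comment lines cost nothing at load time, and the rule numbers leave room both above and below for Server Admin's own generated rules.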

October 16th, 2006

Posted In: Xsan


From 5-0 to 5-2 two weeks later is tough.  Especially when your second loss comes from Vandy.  That QB doesn’t look half bad, but I still hate to lose to the bottom of the conference…

October 15th, 2006

Posted In: Football


A regular expression “engine” is a piece of software that can process regular expressions, trying to match the pattern to the given string. Usually, the engine is part of a larger application and you do not access the engine directly. Rather, the application will invoke it for you when needed, making sure the right regular expression is applied to the right file or data.
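For example, grep is an application that embeds a regex engine: you never call the engine yourself, you just hand grep a pattern and some input, and it applies the expression for you. An illustrative example, not from the original post:

```shell
# grep -E applies an extended regular expression to each input line
# and prints only the lines the embedded engine matches.
printf 'Minneapolis 55401\nno zip here\n' | grep -E '[0-9]{5}$'
# -> Minneapolis 55401
```

The shell, the pipe and the file handling all belong to grep; only the pattern matching itself is delegated to the regex engine.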

October 14th, 2006

Posted In: Uncategorized
