krypted.com

Tiny Deathstars of Foulness

There’s a macOS tool called AssetCacheLocatorUtil, located at /usr/bin/AssetCacheLocatorUtil. Its output goes to… stderr. Because stderr is so much fun to work with (tools like sed read from stdin, so you have to redirect it first). So, to see the caching server(s) you are using and print only their IP addresses, you’d do the following:
/usr/bin/AssetCacheLocatorUtil 2>&1 | grep guid | awk '{print $4}' | sed 's/^\(.*\):.*$/\1/' | uniq
If you use Jamf Pro and would like to use this as an extension attribute, that’s posted here: https://github.com/krypted/cachecheck. I didn’t do any of the if/then there, as I’d usually just do that on the JSS.
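If you’d rather roll your own than grab the one from that repo, a minimal sketch of what such an extension attribute might look like follows (this is not the script from the linked repo; the “None” fallback is just one way you might report an empty result):
#!/bin/bash
# Hypothetical extension attribute sketch: report the caching server IP(s), or "None" if nothing was found.
caches=$(/usr/bin/AssetCacheLocatorUtil 2>&1 | grep guid | awk '{print $4}' | sed 's/^\(.*\):.*$/\1/' | uniq)
if [ -z "$caches" ]; then
  echo "<result>None</result>"
else
  echo "<result>$caches</result>"
fi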

April 17th, 2017

Posted In: Mac OS X, Mac Security, Mass Deployment, Network Infrastructure, precache


Recently I was working on a project where we were isolating IP addresses by country. In the process, I found an easy little tool built right into OS X called ip2cc. Using ip2cc, you can look up which country an IP address is in. To do so, simply run ip2cc followed by a name or IP address. For example, to look up apple.com you might run:
ip2cc apple.com
Or to look up Much Music, you might run:
ip2cc muchmusic.ca
The output would be:
IP::Country modules (v2.28) Copyright (c) 2002-13 Nigel Wetters Gourlay
Database updated Wed May 15 15:29:48 2013
Name: muchmusic.com
Address: 199.85.71.88
Country: CA (Canada)
You can grab just the country line:
ip2cc apple.com | grep Country:
Or just the country code:
ip2cc apple.com | grep Country: | awk '{ print $2 }'
Finally, ip2cc is located at /usr/bin/ip2cc, so we’ll complicate things just a tad by replacing the hostname with the current IP (note that private IPs can’t be looked up, so this only works if you’re rocking a WAN IP or feeding it what a curl from a service like whatismyip brings back):
ip2cc `ipconfig getifaddr en0` | grep Country: | awk '{ print $2 }'
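And if you have a whole list of addresses to classify, something along these lines works as a quick sketch (the ips.txt file name is just an example, one address per line):
# Hypothetical sketch: tally the addresses in ips.txt by country code.
while read -r ip; do
  ip2cc "$ip" | grep Country: | awk '{ print $2 }'
done < ips.txt | sort | uniq -c | sort -rn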

December 13th, 2014

Posted In: Mac OS X, Mac OS X Server


Simple request: search all files in a directory and its child directories for a specific pattern and then return the filename without the path to the file. There are a few cmdlets we end up needing to use:
  • Get-ChildItem: Creates a recursive array of filenames and pipes that output into the For loop.
  • ForEach-Object: Starts a for loop, looping through the output of the command that has been piped into the loop (much easier than an IFS array IMHO).
  • If: This starts the if pattern that ends after the select-string in the below command, but only dumps the $_.PSPath if the pattern is true.
  • Select-String: Searches for the content in the file.
  • Split-Path: This is the PowerShell equivalent of basename and dirname. You can use this cmdlet to extract parts of the path to a file. In this case, we’ll use the -Leaf option, which effectively acts like basename, returning just the file name portion of the path.
Get-ChildItem -include * -recurse | ForEach-Object { if( ( $(Get-Content $_) | select-string -pattern "Finished processing mailbox") ) { $_.PSPath }} | Split-Path -Leaf
You can also search for the files that specifically don’t have the given pattern in them by adding a ! in front of the Get-Content:
Get-ChildItem -include * -recurse | ForEach-Object { if( !( $(Get-Content $_) | select-string -pattern "Finished processing mailbox") ) { $_.PSPath }} | Split-Path -Leaf
Note: This runs recursively from the current working directory (and yes, you can use pwd to return a path, just like the bash built-in). Finally, the > operator can be tacked onto the end to dump the data to a file:
Get-ChildItem -include * -recurse | ForEach-Object { if( !( $(Get-Content $_) | select-string -pattern "Finished processing mailbox") ) { $_.PSPath }} | Split-Path -Leaf > Complete.txt
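For comparison, a rough bash equivalent of that last example (file names only, for files that don’t contain the pattern) might look something like this; it’s a sketch using grep’s -L flag, not a translation of the PowerShell above:
# Recursively list files under the current directory that do NOT contain the pattern, stripped to bare file names.
grep -rL "Finished processing mailbox" . | xargs -n1 basename > Complete.txt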

April 18th, 2014

Posted In: Active Directory, Microsoft Exchange Server, Windows Server


The first thing that loads in OS X is the kernel. The kernel is how users interface with hardware; it sets the stage for that interaction by probing for each driver that needs to be loaded and tracking what it finds. Everything about the system, along with the pertinent boot parameters, is tracked as the kernel loads. Even if you’re booting in verbose mode, most of this probably happens too fast to notice. You might be able to pause it, but you’re still trying to react to things too quickly in many cases. That’s where the dmesg command comes into play, which lets you review and control the messages logged since the system booted. The dmesg command also picks up some diagnostic messages after the system is done booting, such as when a new interface is detected. The dmesg output is narrower than what you see in Console: explicitly the I/O-related kernel messages since boot, rather than all the pesky OS stuff you might otherwise see, such as snarky little “Spotlight can’t index this or that” kind of errors. Instead, dmesg output is only a few easily readable screens that focus on the system hardware and I/O interfaces. Using dmesg is pretty easy. You’ll need to run it with sudo, but there’s no need for any parameters:
sudo dmesg
You can use -n for the level of errors or -s to constrain the buffer size, but in Mac OS X you likely don’t need to do either, as there’s not a ton of information. If you’re looking for something specific, such as information on an AirPort interface, just grep it out:
sudo dmesg | grep AirPort
Overall, dmesg is helpful for stuff like this:
vnic0: promiscuous mode enable failed
Apparently, Rajesh Koothrappali couldn’t find any booze… Or this:
USBF: 23952.350 AppleUSBEHCI[0xffffff800b7dc000]::Found a transaction which hasn't moved in 5 seconds on bus 0xfd, timing out! (Addr: 4, EP: 2)
I had a bad Drobo on the system… which I might have known from the fact that the Finder froze when it was plugged in. But I always like to see it in the logs. Swap out one drive, the message goes away, and the Drobo loads just fine. Have I mentioned recently that I’m liking the low-end Drobos less and less…
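One pattern I find handy: snapshot the buffer before and after plugging a device in, then diff the two to see what the kernel had to say about it. A quick sketch (the file paths are just examples):
sudo dmesg > /tmp/dmesg_before.txt
# ...plug in the device and give it a few seconds...
sudo dmesg > /tmp/dmesg_after.txt
diff /tmp/dmesg_before.txt /tmp/dmesg_after.txt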

June 13th, 2012

Posted In: Mac OS X, Mac OS X Server, Mac Security, Mass Deployment


Yesterday I showed a way to get the serial number from a Mac OS X machine. However, as a couple of people pointed out, Apple will soon be adding another character to the serial number. This means that rather than use cut I should have used awk to allow for either serial number length. To grab the serial this way:
ioreg -l | grep IOPlatformSerialNumber | awk '{print $4}'
Or, to strip the quotation marks from the output:
ioreg -l | grep IOPlatformSerialNumber | awk '{print $4}' | sed 's/"//g'
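If you’re dropping this into a larger script, it’s usually easiest to capture the serial into a variable first; for example:
serial=$(ioreg -l | grep IOPlatformSerialNumber | awk '{print $4}' | sed 's/"//g')
echo "Serial number: ${serial}"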

May 13th, 2010

Posted In: Mac OS X, Mac OS X Server, Mass Deployment


I see a number of environments that are running routine defragmentation scripts on Xsan volumes. I do not agree with this practice, but given certain edge cases I have watched it happen. When defragmenting a volume, there is no reason to do the entire volume, especially if much of the content is static and doesn’t change very often. And if specific files don’t have a lot of extents, they are easily skipped. Let’s look at a couple of quick ways to narrow down your defrag using snfsdefrag. The first is to specify the path: use the -r option followed by the starting path under which you want to recursively seek out fragmented files. The second is to limit by the number of extents in a file. To combine these, let’s assume that we are looking to defragment a folder called Seldon on an Xsan volume called Harry:
snfsdefrag -r -m 25 /Volumes/Harry/Seldon
You should also build logic into your scripts if you are automating these events. For example, you could use the -c option to just look at how many extents there are and only perform the actual defragmentation, as part of an if/then, in the event that there are more than a specified threshold. Another example is to check whether there’s already an snfsdefrag process running and, if there is, not fire up yet another instance:
currentPID=$(ps -ewo pid,user,command | grep snfsdefrag | grep -v grep | cut -d " " -f 1)
echo "The current snfsdefrag PID is ${currentPID} so we are aborting the process." > $logfile
If you insist on automating the defragmentation of an Xsan volume, there are lots of other little sanity checks you can do as well. Oh, you’re backing up, right?
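To put those checks together, here’s a sketch of what such a wrapper might look like; the paths, the threshold and the log file are examples only, not a production script:
#!/bin/bash
# Hypothetical wrapper around snfsdefrag with the sanity checks described above.
logfile="/var/log/snfsdefrag_wrapper.log"
target="/Volumes/Harry/Seldon"
# Bail out if an snfsdefrag process is already running.
currentPID=$(ps -ewo pid,user,command | grep snfsdefrag | grep -v grep | cut -d " " -f 1)
if [ -n "$currentPID" ]; then
  echo "The current snfsdefrag PID is ${currentPID} so we are aborting the process." >> "$logfile"
  exit 0
fi
# Only defragment files under the target path that have more than 25 extents.
snfsdefrag -r -m 25 "$target" >> "$logfile" 2>&1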

February 13th, 2010

Posted In: Xsan


I’ve noticed a couple of occasions where data corruption in Xsan causes a perceived data loss on a volume. This does not always mean that you have to restore from backup. Given the cvfsck output, you can isolate the inodes using the following:
cat cvfsck.txt | grep Error | cut -c 27-36 > iNodeList.txt
Once isolated, you can then use the cvfsdb tool to correlate these to file names. For example, if you have an inode of 0x20643c8, you can convert it into a file name using the following:
cvfsdb> show inode 0x20643c8
The output will be similar to the following:
000: 0100 8000 3f04 0327 5250 2daa 0000 0000 |....?..'RPL.....
010: 0000 024d 6163 506f 7274 1233 3455 362e |...MyFile-9.6.
020: 302d 2222 2e35 1ca4 656f 7061 7264 2e64 |0-Leopard.d
030: 6d67 0404 084e 5453 4400 0000 0000 0000 |mg...NTSD.......
040: 0000 0000 0000 0000 0000 0000 0000 0000 |................
050: 0000 0000 0000 0000 0000 0000 0000 0000 |................
The string to the right of the | and between the dot runs can then be used to obtain a file name. Using that file name you can then put Humpty Dumpty back together. If cvfsck has fixed a lot of corruption, you can have a lot of these to piece back together and would therefore want to automate the task in a script.
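For example, here’s a sketch of how you might turn that isolated list into cvfsdb commands to run (this assumes the cut above left bare inode values in iNodeList.txt; adjust the 0x prefix to match what you actually captured):
# Hypothetical helper: build a list of "show inode" commands to feed into a cvfsdb session.
while read -r inode; do
  echo "show inode 0x${inode}"
done < iNodeList.txt > cvfsdb_commands.txt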

February 12th, 2010

Posted In: Xsan


Recently I’ve been looking at a lot of log files, and sorting through them can be a bit of a pain. However, there are some tools out there to help make this process a bit easier. The first of these is sort. If I have a log with 1,000 lines, I like to initially see any lines that are repeated numerous times so that I can tell when servers are throwing a lot of errors, but combing through them all can get tedious. sort will help to reduce the volume and organize the lines in a manner that makes sense. For example, to sort the log and remove duplicate line entries we could use the following:
sort -u /Library/FileSystems/Xsan/data/MYVOLUME/logs/cvlog
The uniq command can also be used to simply collapse duplicates. For example, if we’re looking to cat the log without changing the order and collapse repeated (adjacent) entries down to one:
cat /Library/FileSystems/Xsan/data/MYVOLUME/logs/cvlog | uniq
If we want to get a little more granular, we can also constrain the output to lines containing a word using grep. For example, if we only want to see the lines that have the word Error in them, with the repeats collapsed:
cat /Library/FileSystems/Xsan/data/MYVOLUME/logs/cvlog | grep Error | uniq
There are also times when we only want to see lines that are not repeated, so to leverage uniq’s -u option and only look at lines that appear once:
cat /Library/FileSystems/Xsan/data/MYVOLUME/logs/cvlog | uniq -u
The volume of a given error can be indicative of some issues. To see each repeated line prefixed with the number of times it appears:
sort /Library/FileSystems/Xsan/data/MYVOLUME/logs/cvlog | uniq -c
Now, uniqueness can totally be thrown off by the date and time stamp typically prefixed to each line in a log. The field- and character-skipping options of uniq help get that out of the way: -f ignores a number of leading fields and -s ignores a number of leading characters (older uniq implementations spelled these -n and +n). For example, let’s say we wanted to look at lines that appear only once, ignoring the first 14 characters of each line since they’re strictly the date and time stamp. Then we could use:
cat /Library/FileSystems/Xsan/data/MYVOLUME/logs/cvlog | uniq -u -s 14
If you haven’t started to leverage sort and uniq in your log perusal then getting started may seem like it takes longer to figure out the commands than it takes to go through the logs. But keep in mind that the more you do it the faster you get. And the faster you get zipping through those logs the more quickly you can restore service, triangulate corruption and most importantly go surfin’. Happy huntin’.
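Putting a few of these together, here’s a quick “top ten” pass over a cvlog as a sketch: strip the first 14 characters (the date and time stamp), then count and rank what’s left. The column width is an example, so adjust it to your log format:
cut -c 15- /Library/FileSystems/Xsan/data/MYVOLUME/logs/cvlog | sort | uniq -c | sort -rn | head -10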

February 10th, 2010

Posted In: Mac OS X, Ubuntu, Unix, Xsan


I have been known to occasionally ask what build number of Mac OS X someone is using. The sw_vers command can be used to obtain this. Simply run:
sw_vers
And the BuildVersion will be listed. Or just to get the BuildVersion:
sw_vers | grep BuildVersion
Or to get just the number (useful in scripts that catalog such a thing):
sw_vers | grep BuildVersion | cut -c 15-21
As one comment just indicated, you could also just use `sw_vers -buildVersion`. I guess I should review these commands every operating system or 4… Thanks Allen.
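For example, in an inventory script you might just capture that into a variable:
build=$(sw_vers -buildVersion)
echo "Build: ${build}"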

February 4th, 2010

Posted In: Mac OS X, Mass Deployment


A number of the commands available for finding positions in a line and extracting only a certain amount of text can be pretty cumbersome in terms of learning curve. This isn’t to say that they’re terribly complicated once you get the hang of them, but it can take a little while to get there. And when you need something fast, you might want an easy command for extracting text from lines. In these cases, consider cut. The cut command doesn’t do regular expressions (I guess you could argue that its ability to use a delimiter can be used as a regular expression), so it’s really easy to use. Basically, you feed cut some data and then tell it which characters in each line you want to keep. It then gets rid of the rest. The easiest use of this is to look at a list of data. For example, let’s say we have a file called test.txt with the following contents:
abc123
abc124
abc134
abc234
abd234
acd234
Now we’re going to cat the file (which just reads the file contents) and pipe that output into a cut command (which is done by simply adding a pipe character at the end of the first part of the command). Then we’re going to use the -c option of cut (which looks at character positions) to simply grab the first three positions (1-3) of each line. The command would end up looking as follows:
cat test.txt | cut -c 1-3
And the output would look as follows (this output could then be redirected into a new file btw):
abc
abc
abc
abc
abd
acd
You can also specify multiple ranges of characters (or single characters for that matter). For example, to see only characters 1-2 and 5-6:
cat test.txt | cut -c 1-2,5-6
Overall, cut is a very easy tool to use, with the limitation that the text you want to keep must sit at consistent character positions in each line. It also processes every line in a file; however, to go another step and only extract positions from lines that match a pattern, you can simply add a grep in the middle. For example, if you’re looking for each line of our sample text file that has the number 4 in it, you could do:
cat test.txt | grep 4
This would show you only the last five lines of the file, since those are the only lines that have that number in them. You could then pipe that output into your cut and, let’s say, look for characters 1-3 and 6:
cat test.txt | grep 4 | cut -c 1-3,6
Your result would then be the following:
abc4
abc4
abc4
abd4
acd4
Finally, there are going to be times when you’re not looking for a specific character position in a line, but instead for a pattern that begins after another pattern. For this you’re going to end up needing a more advanced tool, such as awk or, if you’re feelin’ frisky (maybe I’m speaking for myself there), regular expressions. These tools have a steeper learning curve, but are ultimately far more useful.
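As a quick taste of that, here’s a rough awk equivalent of the grep-plus-cut example above, matching only lines that contain a 4 and printing characters 1-3 and 6 of each:
# Print characters 1-3 and character 6 of every line containing a 4.
awk '/4/ { print substr($0, 1, 3) substr($0, 6, 1) }' test.txt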

January 17th, 2010

Posted In: Mac OS X, Mac OS X Server, Mac Security, Unix

