krypted.com

Tiny Deathstars of Foulness

Here’s a little app to sync data from a DynamoDB database to an iOS device. Includes the ability to search. Simply edit the constants file to link it to your source. Enjoy.

June 6th, 2016

Posted In: iPhone


A SQL JOIN clause combines rows from tables, based on a field shared between them (often a joining or ID field). There are four types of JOINs:
  • INNER JOIN: Show rows when there’s a match in BOTH tables
  • LEFT JOIN: Show all rows from the left table, along with any rows that match up from the right table
  • RIGHT JOIN: Show all rows from the right table, along with any rows that match up from the left table
  • FULL JOIN: Show all rows from both tables, matched up wherever possible
In this article, we’ll use the same “Customers” table from our first articles:

ID  Site       Contact          Address         City         Zip    Country
1   Krypted    Charles Edge     my house        Minneapolis  55418  US
2   Apple      Tim Cook         spaceship       Cupertino    95014  US
3   Microsoft  Satya Nadella    campus          Redmond      98053  US
4   Facebook   Mark Zuckerberg  foodhall        Menlo Park   94025  US
5   JAMF       Dean Hager       Grain Exchange  Minneapolis  55418  US

Let’s look at a selection from the “IPs” table:

IPID  SiteID  IP
101   1       202.76.2.111
102   2       172.67.33.90
103   3       44.75.32.12
104   4       44.75.32.13
105   5       44.75.32.13

In the above two tables, each has its own ID field, with ID being the Customers table ID and IPID being the ID of an IP address; the SiteID column in the IPs table maps back to the ID column in Customers. We have a series of sequential IPs for SiteIDs 3-5, with IPIDs 104 and 105 pointing at the same IP address. Note that the IPs are fake. I made ’em up. Sorry about not using production data, but it didn’t make that much sense…

Here, we’ll use a SELECT and identify the fields, defining the table followed by a dot and then the field name as the data to load into memory. We’ll pull that from our initial table and use an INNER JOIN with the second, using an ON to define which field is used to map out results:

SELECT IPs.IPID, Customers.ID, IPs.IP
FROM IPs
INNER JOIN Customers
ON IPs.SiteID=Customers.ID;

We’ll then get something like this, with the three fields that we defined in the SELECT as the columns to display in the output:

IPID  ID  IP
101   1   202.76.2.111
102   2   172.67.33.90
103   3   44.75.32.12
104   4   44.75.32.13
105   5   44.75.32.13

This is a pretty basic JOIN, but it shows the power we have. We can then use a LEFT JOIN alongside an ORDER BY:

SELECT Customers.Site, IPs.IP
FROM Customers
LEFT JOIN IPs
ON Customers.ID=IPs.SiteID
ORDER BY Customers.Site;

The output would then appear as follows:

Site       IP
Apple      172.67.33.90
Facebook   44.75.32.13
JAMF       44.75.32.13
Krypted    202.76.2.111
Microsoft  44.75.32.12

The different types of JOINs give you the ability to string together some pretty awesome logic, to locate assets in multiple tables and display them. For example, let’s say I also had a Registrar table. I could then also use that ID attribute from our original database as a key to access data connected to our initial table from other tables. Overall, it’s a simple and straightforward means of querying data and displaying those results, or leveraging them into a variable or file in order to make use of them elsewhere (e.g. with a script).
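To round out the list above, a RIGHT JOIN simply flips which table contributes all of its rows. As a quick sketch against the same two sample tables (and note that not every engine supports FULL JOIN — MySQL, for example, requires emulating it by combining a LEFT and a RIGHT JOIN with a UNION):

SELECT Customers.Site, IPs.IP
FROM Customers
RIGHT JOIN IPs
ON Customers.ID=IPs.SiteID
ORDER BY Customers.Site;

With this sample data every IP row has a matching customer, so the output is the same as the LEFT JOIN above; the difference only shows up when one side has rows the other doesn’t.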

February 12th, 2016

Posted In: Uncategorized


Previously, we looked at the SQL SELECT and WHERE options to obtain output and then constrain what would be displayed in a query. The AND and OR operators are used to add additional logic so you can filter records based on more than one condition, which is to say search based on the contents of multiple columns within a single table. AND is used to show information if two different conditions are true, and OR is used to show information if either condition is true. Below is the selection from the “Customers” table that we used in our first article, and we’ll use it here to run some statements with AND and OR:

ID  Site       Contact          Address         City         Zip    Country
1   Krypted    Charles Edge     my house        Minneapolis  55418  US
2   Apple      Tim Cook         spaceship       Cupertino    95014  US
3   Microsoft  Satya Nadella    campus          Redmond      98053  US
4   Facebook   Mark Zuckerberg  foodhall        Menlo Park   94025  US
5   JAMF       Dean Hager       Grain Exchange  Minneapolis  55418  US

The following example SQL statement outputs customers from the country “US” AND with the zip code “55418”, in the “Customers” table from above:

SELECT * FROM Customers WHERE Country='US' AND Zip='55418';

The following SQL statement selects all customers from the city “Minneapolis” OR the city “Cupertino”, in the “Customers” table:

SELECT * FROM Customers WHERE City='Cupertino' OR City='Minneapolis';

As the logic of your searches expands, you can combine AND and OR by nesting logic within parentheses (who said pre-algebra would be useless?!?!). For example, the following SQL statement selects all customers from the country “US” AND whose city is equal to either “Minneapolis” OR “Cupertino”, in the “Customers” table:

SELECT * FROM Customers WHERE Country='US' AND (City='Minneapolis' OR City='Cupertino');
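It’s worth spelling out why the parentheses matter: in SQL, AND binds more tightly than OR, so without them the statement above would be read as the explicitly grouped version below, which returns every Cupertino row regardless of country:

SELECT * FROM Customers WHERE (Country='US' AND City='Minneapolis') OR City='Cupertino';

With this particular sample data both forms happen to return the same rows, since every row is in the US, but as soon as a non-US Cupertino crept into the table the two queries would diverge.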

February 2nd, 2016

Posted In: SQL


Previously, we covered the SQL SELECT statement, used to pull data from a SQL database. Here, we’ll constrain our search to items that match a given string or pattern, using the WHERE clause to filter search results rather than getting all of the records and parsing the output. The WHERE clause extracts records that fulfill a specified condition and follows the general syntax below, replacing the word column with the name of a column in one of your tables and the word table with the name of the table that you’d like to search within:

SQL WHERE Syntax

SELECT column,column FROM table WHERE column operator value;

Below is the selection from the “Customers” table that we used in our first article, and we’ll use it here to run some WHERE clauses:

ID  Site       Contact          Address         City         Zip    Country
1   Krypted    Charles Edge     my house        Minneapolis  55418  US
2   Apple      Tim Cook         spaceship       Cupertino    95014  US
3   Microsoft  Satya Nadella    campus          Redmond      98053  US
4   Facebook   Mark Zuckerberg  foodhall        Menlo Park   94025  US
5   JAMF       Dean Hager       Grain Exchange  Minneapolis  55418  US

The following SQL statement selects all the customers with the zip code (Zip column) matching “55418”, in the “Customers” table:

SELECT * FROM Customers WHERE Zip=55418;

In the above search, I didn’t have to quote what I was looking for. The reason is that an integer doesn’t require quoting; however, if we were searching for a name, or any other text record, we should use quotes. So to repeat the search against the Site column, looking for Krypted, we would use the following:

SELECT * FROM Customers WHERE Site='Krypted';

We used an = operator, but it’s worth noting that there are a number of others that can be super-helpful. The following operators are available when using a WHERE clause:
    • = Equal
    • <>  or != Not equal to
    • > Greater than
    • IN Indicates multiple potential values for a column (see the IN and BETWEEN examples just after this list)
    • < Less than
    • >= Greater than or equal
    • <= Less than or equal
    • BETWEEN Between an inclusive range
    • LIKE Looks for a provided pattern
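Two of those operators, IN and BETWEEN, don’t show up in the rest of this article, so here’s a quick sketch of each against the same Customers table:

SELECT * FROM Customers WHERE City IN ('Minneapolis','Cupertino');

SELECT * FROM Customers WHERE ID BETWEEN 2 AND 4;

The first returns the Krypted, Apple and JAMF rows, and the second returns the Apple, Microsoft and Facebook rows, since BETWEEN includes both ends of the range.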
So using another operator from above, we can also search for all sites that do not contain Krypted, using the following:

SELECT * FROM Customers WHERE Site!='Krypted';

So far, we’ve looked at searching for exact matches. We can also fuzzy our logic up by looking for items that contain a pattern AND something else, using the LIKE operator in a WHERE clause to search for items that contain part of a pattern. The syntax for a SQL LIKE is similar: it begins with the SELECT statement that we’ve used throughout these articles so far, then continues with the WHERE and then a LIKE to define the pattern, as follows, replacing column_name with the name of one of your columns, table_name with the name of the table that column is in and pattern with the actual search you’re looking to run:

SELECT column_name(s) FROM table_name WHERE column_name LIKE pattern;

The following SQL statement selects all customers with a City starting with the letter “M”, with the % inside the single quotes showing where the wildcard data is (in this case, anything that appears after the letter M in a city name):

SELECT * FROM Customers WHERE City LIKE 'M%';

The following SQL statement selects all customers with a City containing the pattern “Park” (e.g. “Menlo Park”):

SELECT * FROM Customers WHERE City LIKE '%Park%';

You can also add NOT in front of LIKE to search for records that do not match a pattern (note that you don’t have all the same operators available, so this helps to get a little more logic in searches when needed). The following SQL statement selects all customers with Country NOT containing the pattern “US” (of which there are none in our sample data set):

SELECT * FROM Customers WHERE Country NOT LIKE '%US%';

Or if we weren’t sure that we wanted to work in a spaceship, we could search for all addresses that aren’t spaceship:

SELECT * FROM Customers WHERE Address NOT LIKE '%spaceship%';

Overall, the SELECT statement has some pretty basic logic and operations, because searching for data within a given table is a pretty straightforward task: you have a pattern, and you look for items that match that pattern. As these articles continue, we’ll get into slightly more complex operations, but much of the work that we do with SQL is done after a query, with another tool, in order to get our data to display or be manipulated in ways that SQL doesn’t do on its own. One of the beauties of SQL, and why it can be so fast, is that it’s simple.

February 1st, 2016

Posted In: SQL


Corruption happens. Sometimes, it’s little things that cause problems. With MySQL, the mysql command line tool has long held the answer for easy corruption issues. There are a number of tools to repair corruption, but the place to start is the REPAIR command within that trusty mysql command line tool. To start, let’s try a backup. In this case, I’m going to use a tool those of us who deal with Media Assets frequently tinker with, CatDV. I’m going to back up the databases with a simple mysqldump command, defining the user and then redirecting the data out to a backup file, which in this case is catdvbak on the desktop:

mysqldump -u catdvadmin -pcatdv catdv > ~/Desktop/catdvbak

If this fails due to corruption, then I personally like to stop my databases and back them up flat before I make any changes, which a repair command will of course do. Then, we’ll need to tap into mysql:

mysql -P 1099 -p

Then, we will be in an interactive mysql environment. Let’s just say the auditLog table in the catdv database is corrupt. First, select the database:

use catdv;

Then, repair that table:

REPAIR TABLE auditLog;

Note: You’ll need to quote things if the name of your table isn’t quite so simple and has special characters. Then try to re-run your backup if it didn’t complete, and you should be good to go! If the repair doesn’t go swimmingly, check out myisamchk for more detailed options.
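If you want to confirm whether a table is actually damaged before (or after) repairing it, CHECK TABLE can be run from that same interactive mysql session, using the same catdv example from above:

CHECK TABLE auditLog;

And as a rough sketch of the myisamchk route mentioned above: with MySQL stopped, it can be pointed at the table’s .MYI index file directly. The path below is just an assumption based on a default data directory, so adjust it for wherever your databases actually live:

myisamchk --recover /usr/local/mysql/data/catdv/auditLog.MYI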

July 26th, 2013

Posted In: Mac OS X, Mac OS X Server, SQL, Xsan


Mobile Home Directory synchronizing in OS X Server environments is used to synchronize the home folder of clients with a copy that lives on the server, so users can roam between computers with their desktop, documents and preferences following them from machine to machine. Server Side File Tracking creates and keeps a copy of the sync database on client machines and servers, comparing the two databases when synchronizing rather than scanning directories for all the synced files each time a synchronization occurs. In environments with synchronizing Mobile Home Directories, Server Side File Tracking (SSFT) can help reduce the amount of time required for syncs. Server Side File Tracking is disabled by default in OS X Mountain Lion Server and cannot be enabled from the Server app. To enable Server Side File Tracking (aka the FileSyncAgent), use the following command:

sudo serveradmin settings info:enableFileSyncAgent = yes

To then turn it back off, if you so choose:

sudo serveradmin settings info:enableFileSyncAgent = no

Logs are then stored in ~/Library/Logs/FileSyncAgent/FileSyncAgentVerbose.log if you need further information. Note that TCP port 2336 needs to be open for the FileSyncAgent to connect to the server over ssh on that port; ssh doesn’t need to be enabled on the standard port 22, but mobile users must have access to the SSH SACL.
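If you just want to see how the setting is currently configured before changing anything, serveradmin can also read a single key back; this is just a quick check rather than anything from the original write-up:

sudo serveradmin settings info:enableFileSyncAgent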

August 16th, 2012

Posted In: Mac OS X, Mac OS X Server, Mass Deployment


There are a number of ways that you can protect your WordPress site from spam bots. The first is to only allow authenticated users to post comments. Doing so can still be a bit unwieldy, but this feature is built into WordPress and so is pretty straightforward to use. Some, who deal with large amounts of spam bots, then choose to completely disable the commenting feature outright (Settings -> Discussion -> Uncheck Allow people to post comments on new articles), but comments can still be made on existing articles, and commentary is one of the best features of WordPress for many. To stop comments on older articles as well, also choose the Automatically close comments on articles older than option on the same page. No site should have to disable comments or bend to the will of a spam bot. You can also then choose (same page again) to email the administrator when a comment is made and then choose to not publish comments until the administrator approves them. But spam bots will still attack, and now you’ll just get a ton of junk email.

So many will turn to plug-ins for WordPress. There are a few of those that I like a lot. One is called Invisible Defender. Invisible Defender adds a couple of fields that are suppressed using the style sheets. Because these invisible comment fields are not displayed to a browser, they should never be filled out; therefore, if a field is filled out, it had to have been done by a bot, and those comments are automatically blocked. Then there’s the ability to force captcha (shows you funny garbled letters and you type them into a verify field). Captcha for account creation means that all but the most sophisticated bots will fail. This additional form of verification that a visitor is a real human can then be skipped by users of OpenID, Facebook and other services, using plug-ins that allow those users to be authenticated through the third party (typically requires a little theme customization). Then there are the Antispambee and Akismet plug-ins, which look at the actual comments and attempt to determine which ones are spam. These make a good layer of defense but should not be the only layer used. Regrettably, any time you have user generated content on a web site you are going to have automated bots attempting to do a number of things, most likely sell black market pharmaceuticals and other items of questionable origin.

There are also bots that attempt to exploit the login page of the WordPress admin (<DOMAIN>/wp-admin or <DOMAIN>/wp-login.php). These are defeated in an entirely different way. One of the best strategies is to lock out addresses that have made a number of invalid login attempts exceeding a threshold that you define; amongst the plug-ins that do this is Login Lockdown WordPress Security. Another way to protect the administrative side of the site is to add an .htaccess file, providing an additional layer of security on top of WordPress (see the example at the end of this article). You can also change the URLs of your login page, which I usually use a plug-in called Stealth Login for.

Finally, I like to back up WordPress in an automated fashion. There are a lot of plug-ins to do this, but I’ve always used WordPress Database Backup. Why? Because it works every time I tested it. I haven’t even bothered to test a good backup and restore for another software package because WordPress Database Backup always works, backs up data to another server I have, and it hasn’t failed me yet.
I always test the restores of data that I’m backing up, and I recommend you test this as well (mileage may vary) if you choose to put it into production; a false sense of security is in many cases worse than no security.
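Going back to the .htaccess layer mentioned above, here’s a minimal sketch of what that might look like, assuming Apache with 2.2-style access directives and a made-up trusted IP address; drop it at the site root and adjust to taste:

# Hypothetical example: only allow a trusted address to reach the login page
<Files wp-login.php>
  Order Deny,Allow
  Deny from all
  Allow from 203.0.113.10
</Files>

On Apache 2.4 and later the same idea is expressed with Require ip instead of the Order/Deny/Allow directives.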

September 28th, 2010

Posted In: WordPress


Recently I did a little article on importing computers into DeployStudio lists. I got an overwhelming number of email requests to go a step further and look at importing computers into DeployStudio from the command line. I’m guessing lots of people want to bolt some middleware onto their mass deployment tools (can’t say I blame ’em).

The first thing to know is that DeployStudio stores most everything in standard property lists. This includes workflows, computer groups and computers. When you installed DeployStudio you selected a location to place your database. For the purpose of this example, we’re going to use /DSDatabase as our location. Within this directory is a folder called Databases. In this folder you’ll see ByHost, which stores each computer in a property list named for the computer’s MAC address followed by .plist. For example, if my computer has a MAC address of 00254bcc76fa, then 00254bcc76fa.plist would be the file that houses information about my computer in DeployStudio. In the ByHost folder is an additional property list called group.settings.plist, which holds the information about the computer groups that you have created. In the Databases folder there is also a directory called Workflows, which houses all of your workflows.

When DeployStudio is started, it dynamically loads the items from the property lists to compile its database. You can add additional property lists easily, but when you do, you will need to restart DeployStudio each time (or each batch). There are a number of keys in the property lists, many of which mirror data that can be imported using the DSImporter.csv template in my previous article. These include:
  • dstudio-auto-disable: Used to disable the DeployStudio Runtime following the initial imaging
  • dstudio-auto-reset-workflow: Used to disable the automate checkbox for the workflow
  • dstudio-auto-started-workflow: Used to assign a workflow. When populating this via the command line you will need to specify the ID of the workflow (remember that the workflows are stored in property lists in the Workflows directory; each has a key for ID, which is what would be used here)
  • dstudio-group: Adds the client to a computer group
  • dstudio-host-ard-field-1: Fills in the Info 1 Computer Information field from ARD
  • dstudio-host-ard-field-2: Fills in the Info 2 Computer Information field from ARD
  • dstudio-host-ard-field-3: Fills in the Info 3 Computer Information field from ARD
  • dstudio-host-ard-field-4: Fills in the Info 4 Computer Information field from ARD
  • dstudio-host-interfaces: Arrays to configure static IP addresses for each NIC
  • dstudio-host-new-network-location: Sets up a Location in the Network System Preferences pane
  • dstudio-hostname: Sets the HostName
  • dstudio-mac-addr: The MAC address of the client
To add a computer, you can use plistbuddy or just the defaults command. In the following example, we’ll continue to house our DeployStudio Repository in the /DSDatabase directory and write in only the computer MAC address and whether it will receive a network location, which from what I can tell are the only required keys:
defaults write /DSDatabase/Databases/ByHost/00254bcc76f2 '{"dstudio-host-new-network-location" = NO; "dstudio-mac-addr" = "00:25:4b:cc:76:f2";}'
Next, we’re going to add another key to set the first ARD field for a different computer when it is imaged. Your database (ERP, SIS, etc) kicks off the following script, which also puts the intended user’s Active Directory user name in the first ARD field:
defaults write /DSDatabase/Databases/ByHost/00254bcc53ab '{"dstudio-host-new-network-location" = NO; "dstudio-mac-addr" = "00:25:4b:cc:53:ab"; "dstudio-host-ard-field-1" = "cedge882";}'
Once you have done so, open System Preferences and then click on the DeployStudio preference to restart the DeployStudio Server. You can also restart the DeployStudio Server using launchctl with the com.deploystudio.server daemon. Once restarted, your new computer (or computers if you ran both examples) should be listed in DeployStudio. Happy scripting!
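Since your ERP or SIS is probably handing you more than one machine at a time, here’s a rough sketch of how the above could be batched from a script. The file name macs.csv and its MAC-address,ARD-field layout are just assumptions for this example; it keeps the same /DSDatabase repository path and com.deploystudio.server daemon label from above, so test it against a copy of your repository before pointing it at production:

#!/bin/bash
# Rough sketch: batch-create DeployStudio ByHost property lists from a
# two-column file formatted as MACADDRESS,ARDINFO1 (macs.csv is a made-up name).
REPO="/DSDatabase"
while IFS=, read -r mac ard1; do
  # The ByHost file name is the MAC address without colons (e.g. 00254bcc76f2)
  shortmac=$(echo "$mac" | tr -d ':')
  defaults write "$REPO/Databases/ByHost/$shortmac" \
    "{\"dstudio-host-new-network-location\" = NO; \"dstudio-mac-addr\" = \"$mac\"; \"dstudio-host-ard-field-1\" = \"$ard1\";}"
done < macs.csv
# Restart the DeployStudio Server so it re-reads the property lists,
# using the daemon label mentioned above
sudo launchctl stop com.deploystudio.server
sudo launchctl start com.deploystudio.server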

August 9th, 2010

Posted In: Mac OS X, Mac OS X Server, Mass Deployment


Exchange has a maximum database size of 16GB. You can temporarily increase this if you exceed it by editing the registry. To do so, open a registry editor (Start -> Run -> regedit) and browse to this location:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS

Now, find the name of the server whose database you would like to increase the size of and click on it. Then, click on either the folder that starts with Public- or Private-, according to which store you want to increase the size of. Now add a REG_DWORD with a name of:

Database Size Limit in GB

Now set the value of the limit to 17GB (just type in 17) and reboot the server.
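If you’d rather script it than click through regedit, the same value can be set with the built-in reg command. This is just a sketch: the <ServerName> and Private-<GUID> portions are placeholders you’d replace with the actual key names you see under MSExchangeIS on your server:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\<ServerName>\Private-<GUID>" /v "Database Size Limit in GB" /t REG_DWORD /d 17

The same command pointed at the Public- key raises the limit on the public store instead.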

June 2nd, 2005

Posted In: Microsoft Exchange Server
