Friday, December 4, 2009

Random Empty Print Jobs Sent to Network Printers


I recently came across an odd problem involving empty print jobs being sent to printers across our network.  Since our printers are all configured to print a cover sheet with the user's username before each job, this resulted in dozens of cover sheets being printed on various printers throughout our company.  On closer inspection, the jobs being spooled were all named "Remote Desktop Redirected Printer Doc" -- so I knew that Remote Desktop was involved somehow.

At some point in the troubleshooting process I found that I could recreate the problem at will (which is gold in the technical troubleshooting world).  All I had to do was start a Remote Desktop session from a computer with network printers installed to an XP computer on our network, with the option to redirect local printers to the remote session enabled.  Whichever network printers were installed on the local machine, and redirected to the remote session, would receive a random number of these empty print jobs.

After scouring the internet for a while, and involving Microsoft support, I discovered that our problem was related to a little third-party application called Scan2PC.exe -- which was installed with the drivers for a Dell multifunction printer that all of our executives have.  So, if we started a Remote Desktop session to any XP computer that had this application installed, the problem happened.  As soon as I removed the Dell drivers, the problem went away.  Since our executives aren't planning on getting rid of their brand new multifunction devices anytime soon, our current workaround is to uncheck the box in the RDP session options that redirects local printers to the remote session.
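If you'd rather not rely on users remembering to uncheck that box every time, the setting can also be baked into the saved connection file.  A quick sketch -- Default.rdp is just the file mstsc saves in My Documents, and the line can be added or edited in Notepad:

redirectprinters:i:0

With that set to 0, the connection never redirects local printers, so the rogue jobs never get a chance to spool.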

It's crazy how such a small thing can cause so many weird things to happen!

Tuesday, November 3, 2009

My Fake Droid Review


Disclaimer: This post is a joke based on a gift that I received this week from my ever-so creative wife. If you are looking for the latest and greatest news regarding the Motorola Droid, coming out on Verizon this Friday, you will need to look elsewhere :) With that said, let's have some fun!

Ever since I heard about the Droid, I've pretty much been obsessed with getting more information about it. I've been following any Android news website I can find, and also do daily searches on Google news. I am currently a Verizon customer and have long awaited the day when they present their customers with a truly "open" device that hasn't been gimped. Well my friends, I believe Friday is that day.

In any case, if I do end up getting the Droid it will partially be because of the mercy of my wife. I just used my new-every-two upgrade this summer to get the enV3 by LG (a great phone by the way), so I'm not eligible for a new phone discount anytime soon. However, my lovely wife is a bit overdue for an upgrade. So, I had to convince her that she does in fact want my enV3 and really does want to let me use her upgrade. Well, sometime after having that talk with her, she decided to make a little gift for me: her take on the Droid without seeing any pictures or reading any reviews. All she had to go on were things she may have heard me say and things she knew about technology in general -- and mind you, my wife is not a techie.

So, the only thing not self-explanatory in the picture above is the antenna where it says: "Perfect Reception Always". Here are the rest of the pictures (unfortunately when she tried to write on the tape holding it all together, it didn't stay, so I'll clarify when necessary):

In case you can't tell, the top half says "Free Navigation" and the bottom has "USB", "Ear Phones" and "Dial-up Modem" (Love that one!).

I couldn't stop laughing when I saw the "Jedi Button" -- who KNOWS what kind of power will be unleashed when I press it!

Here you have the "Hi-tech camera" and "Google Maps".

The "Cool Google easy Keyboard" -- I'm not sure what some of those symbols are though :)

All in all, a pretty good device -- don't ya think?

Tuesday, October 13, 2009

Wyse Device Manager Agent Disabled

In our thin client implementation we decided to go with Wyse.  We're using the Wyse OS based V10Ls for standard users and Windows XPe based R90Ls for users that require special setups.  Managing the V10Ls could not be easier: set up an FTP server with one config file and put the thin clients on DHCP.  The rest is magic.  With the R90Ls, however, we needed a way to customize the factory XPe image and deploy it to the rest of the devices.  We initially looked into using Altiris for imaging, since we were already using it for some of our servers, but soon discovered that it would cost us about $5,000 in licensing to image the rest of our thin clients.

Enter Wyse Device Manager (WDM).  This free thin client management solution from Wyse is great for keeping an inventory of your thin client devices, and can also be used for things like remote control and imaging.  The process of taking an image of a thin client is fairly simple -- customize the current image to your liking, run a built-in prep script on the client (so that the device maintains its "uniqueness") and then take the image using the WDM interface.

As I started creating our XPe images, I noticed something.  Every time we took an image, the WDM Agent service was stopped and disabled on the thin client (it shows up as the HAgent service in the Services tool).  No matter what I did, this service ended up disabled, leaving the device out of communication with the WDM server.  Obviously this defeats the purpose, since going forward we would not be able to remotely control or image that thin client.

After doing some digging, I discovered what the issue was: a typo in the built-in prep script.  In prep_4_man.bat in C:\Windows\Setup there was a line that read:
regedit /s /i c:\windows\setup\isetup.reg
There was no file named isetup.reg in C:\Windows\Setup -- but there WAS a file named setup.reg.
I simply removed the extra "i" from that line, which allowed the chain of post-imaging scripts to run, letting the thin client re-establish itself and enable the WDM Agent service.
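If you have clients that were already imaged with the agent disabled, you shouldn't need to re-image them just to fix the service.  A quick sketch, assuming HAgent is the actual service name and not just the display name shown in the Services tool:

sc config HAgent start= auto
net start HAgent

(The space after "start=" is required by sc.)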

I'm not sure how many people are building custom XPe images with Wyse thin clients -- but if you're using the latest XPe build from Wyse, this should help!

Wednesday, September 16, 2009

Users Unable to Change Expired Passwords on Windows Server 2008

In our environment, we have Citrix XenApp 5.0 publishing desktops from Windows Server 2008.  Our users connect to these published desktops via thin client or through the web interface.  We recently decided to put our password policy into effect, which included expiring user passwords once a month.

When the first user experienced the password expiration interface in Server 2008 after coming back to a locked workstation, they received the following message:
"The password for this account has expired. To change the password, click Cancel, click Switch User, and then log on."

However, there was no Cancel button to click on, and no apparent way for them to either change their password or log off and log on again to do so.  The only way we could get around this was for them to call the help desk; we'd manually reset their password in Active Directory, and then they could log in again using the new password.  An unacceptable solution in my opinion :)

So, I started to do some digging and found the following Microsoft KB article:
http://support.microsoft.com/kb/958900
which has an associated hotfix.  Once we applied it, users were able to happily go on changing their passwords when they expired.

But then, we noticed something else.  Users with two monitors at their station were now experiencing an interesting symptom: when the Server 2008 login screen came up, the dialog was centered between the two monitors (instead of appearing only in the primary), so only the left half of the dialog was visible on the primary monitor.  The secondary monitor was completely black.

Oddly enough, the solution was to upgrade to Server 2008 SP2.  The service pack includes the previously mentioned hotfix, but for some reason does not have the same effect on dual monitors that the hotfix alone had.

I spent several hours scouring the web for a solution and didn't find anything -- so hopefully this will help you!

Thursday, August 27, 2009

How to add a DNS suffix from a command prompt

We recently moved our datacenter, which involved adding our servers to our new domain and related DNS server (let's call it mydomain.com).  Previously, users connected to a telnet server by hostname (let's call it server1).  Well, after moving our datacenter, users could no longer connect to the server unless they used its FQDN of server1.mydomain.com. 

At this point we knew we'd have to hit every single computer (hundreds of them) on our network to update their preset connections.  We had two options: join each machine to the domain, or add a DNS suffix of mydomain.com to their local area connection TCP/IP properties.  We opted for the latter, since joining each machine to the domain would take longer and is unnecessary, considering we're in the process of rolling out thin clients to replace each of these machines.

In order to add a DNS suffix to a TCP/IP connection remotely, all you need is a list of IP addresses and the following command:
wmic /USER:administrator /PASSWORD:adminpassword /node:@c:\iplist.txt nicconfig call SetDNSSuffixSearchOrder (mydomain.com)

Where C:\iplist.txt contains a list of IP addresses, one per line.
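For example (with made-up addresses), the file is nothing fancier than:

192.168.1.101
192.168.1.102
192.168.1.103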

After running this command for all of the IP addresses, users could then resolve server1 without needing to type out the whole FQDN.  Of course, the command could also be dropped into a script if you wanted to run it that way.
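To spot-check that the change actually took, the same node list can be queried back.  This is a sketch using the DNSDomainSuffixSearchOrder property of the Win32_NetworkAdapterConfiguration class:

wmic /USER:administrator /PASSWORD:adminpassword /node:@c:\iplist.txt nicconfig get Caption,DNSDomainSuffixSearchOrder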

Happy Networking!

Monday, August 10, 2009

How to "forget" network share credentials so you can authenticate as another user

UPDATE (10/26/09): I recently stumbled across a case where I was getting the "multiple connections" error and the steps below did not resolve the issue. There is one more place to look (especially if you checked the box to "Remember my password" when connecting to the network share) to remove stored credentials. Go to Control Panel -> User Accounts. Then go to the Advanced tab and under "Passwords and .NET Passports" click on "Manage Passwords". If the server you're trying to connect to is listed there, you're in luck. Simply remove the entry, and log off and then back in again. You should then be prompted for your new credentials.

This is something that has saved me a lot of time by removing the need to log out from my current Windows session. Have you ever connected to a network share and then wanted to authenticate as another user and received the following error:
"Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed."?

For the longest time, I thought the solution was to log out and then back in again to authenticate as the other user. It turns out there is a much easier way to get your computer to "forget" your current credentials so that you can use alternative ones. Simply bring up a command prompt and type:
net use * /d /y
This will effectively disconnect all remote connections as well as their associated credentials.
If you have more than one network connection open and you'd rather just delete a specific connection, type in:
net use
in a command prompt window. This will list all of your current remote connections. To delete a specific connection (let's say to \\server\share), type in the following in the command prompt:
net use \\server\share /d /y

If you're more of a GUI fan, you can go to My Computer and then select:
Tools -> Disconnect Network Drive
Then just select the specific connection that you'd like to disconnect and click the OK button.

I hope that this saves you some time!

Tuesday, July 21, 2009

Diving Into CakePHP and Ruby on Rails

I love web development. It's the one kind of programming that seems to continually intrigue me and bring me back for more. I developed my first site using PHP (essentially on the WAMP stack) about 8 or 9 years ago, and haven't looked back. It wasn't until recently that I started looking into frameworks and libraries such as jQuery, Prototype, etc. I always preferred doing things from scratch with a text editor. Sure, it took longer, but I knew the complete ins and outs of my entire website. Well, now that time is becoming more and more of a luxury, I can't afford to build sites from scratch anymore. Hence the discovery of two great frameworks: Ruby on Rails and CakePHP.

First, if you don't have any exposure to either of these technologies, there are some similarities in their paradigms. Both are designed to save you, the developer, time in building websites. Their goal is to do the monotonous, repetitive, underlying work for you so you can move on to build the rest of the website quickly. Both use principles of MVC (Model View Controller) development. Essentially, their design forces you to use good design principles in your website. A very basic, and incomplete, description of the MVC components could be the following:
A Model is the data and associated interactions (Objects and Database)
A View is the presentation of that data (HTML, CSS)
A Controller is the logic used (Ruby/PHP)
A more complete explanation can be found here.
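To make that split a little more concrete, here's a rough, framework-agnostic sketch in plain PHP (the class and method names are mine, not Cake's or Rails'):

<?php
// Model: owns the data and how it's fetched (in a real app this would be a database query).
class Post {
    public function findAll() {
        return array(
            array('title' => 'Hello MVC', 'body' => 'First post'),
            array('title' => 'Another post', 'body' => 'More content'),
        );
    }
}

// Controller: the logic that connects a request to a model and a view.
class PostsController {
    public function index() {
        $model = new Post();
        $this->render($model->findAll());
    }

    // View: presentation only -- in a real framework this would live in a template file.
    private function render($posts) {
        foreach ($posts as $post) {
            echo '<h2>' . htmlspecialchars($post['title']) . '</h2>';
            echo '<p>' . htmlspecialchars($post['body']) . '</p>';
        }
    }
}

$controller = new PostsController();
$controller->index();
?>

The frameworks obviously do far more than this (routing, database mapping, helpers and so on), but that division of responsibilities is the core idea.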
Another great thing that both provide is the ability to expose REST APIs. So, for example, if you were to collect data that you wanted to open up to third-party applications, you could provide them with an API to access it. To me, this is a must to allow for the layers of abstraction I mentioned in a previous post.

Ruby on Rails, until very recently, was a mythical and powerful creature to me. I had heard great things about it, heard what it could do, but I had absolutely no experience with it. I decided to change that this week. Within minutes I had completed the download and install of all required software to be up and running (I already had XAMPP on my machine from other development projects). Two great tutorials for learning basic Ruby on Rails can be found here and here. I went through both of them and feel like I have a good understanding of the design principles and capabilities of the technology. Ruby is a very interesting language and pretty easy to pick up. Ruby on Rails seems to be picking up steam in the US as of late, and is being used by websites such as Twitter.

Now, at first I was thinking, why learn another language like Ruby? I already have a good grip on PHP, so why not just stick with that? Great question! (Yes, sometimes I answer my own questions.) That's why I first looked at using CakePHP (plus, my friend was interested in learning it as well). According to the creator of CakePHP, he liked the idea of Ruby on Rails and wanted to bring that to the world of PHP, so he basically took the idea. To quote him from here:
"While it's difficult to copy Rails in PHP, it's quite possible to write an equivalent system. I like the terseness of Ruby code, but I need the structure that Rails provides, how it makes me organize my code into something sustainable. That's why I'm ripping off Rails in Cake."
I followed along with the tutorials on the CakePHP website and had no problem setting this environment up either. I just dropped it into my existing XAMPP setup and ran with it.

I also found plenty of "CakePHP vs. Ruby on Rails" type discussions. In the end, I surprisingly don't have much of a preference. Both were easy to set up, both were easy to learn the basics of, and both provide the framework principles I was looking for. I have a feeling that with more exposure I'll be able to choose a side more easily, but in the meantime, I'm just waiting around for a real project to try these out on. What about you? Which do you prefer?

Tuesday, July 14, 2009

Bible Gateway Search for Ubiquity 0.5

When I upgraded the Ubiquity extension in Firefox this morning to version 0.5, the Bible Gateway search command that I created died.  Apparently, it used a deprecated version of the API -- and so did the other Bible search command that I had, rendering both useless.  If you are completely unfamiliar with Ubiquity, it's an awesome extension available for Firefox that allows you to quickly carry out common tasks (similar to something like Spotlight or Quicksilver).  For more information on the extension, you can go here.

So, if you're interested in updating your commands to reflect the new API, you should probably start at the new command authoring tutorial.  There are some great examples in there on how to get started.

For anyone interested, I updated my Bible Gateway Search command to the following:
CmdUtils.CreateCommand({
  names: ["bible-gateway"],
  icon: "http://www.biblegateway.com/favicon.ico",
  author: { name: "Matt Augustine", email: "sokkerstud_11@hotmail.com"},
  license: "GPL",
  description: "Launches a passage look up on BibleGateway.com",
  help: "bible-gateway (passage query)",
  arguments: [{role: "object",
               nountype: noun_arb_text,
               label: "passage"}],
  preview: function( pblock, arguments) {
    var template = _("Searches BibleGateway.com for ") + arguments.object.text;
    pblock.innerHTML = CmdUtils.renderTemplate(template);
  },
  execute: function(arguments) {
    var url = "http://www.biblegateway.com/passage/?search={QUERY}"
    var query = arguments.object.text;
    var urlString = url.replace("{QUERY}", query);
    Utils.openUrlInBrowser(urlString);
  }
});

Basically, it just does a very simple passage lookup and opens it up.  So, for example, valid queries would be things like:
bible-gateway John 1-3
bible-gateway Matthew 1:4-5

If you would like to install this command in your browser, feel free to copy the code above, or you can install it by going here.  If you have any questions about my command, feel free to leave a comment below!

Tuesday, July 7, 2009

Overcoming Information Overload - Layers of Abstraction

With all of the new web services coming out, seemingly on a daily basis, it is easy to feel overwhelmed with the amount of data that is available to us on the Internet. It used to be easy: pull up your favorite search engine, type in what you're looking for and voila! Now, there are search engines, social sites, RSS feeds, blogs, forums, and the list goes on and on (each with their own built-in search engines of course). Plus, trying to keep up with updating several different pages -- Facebook, MySpace, Twitter, flickr, blog of choice, and don't forget good ol' fashioned email -- is quite the daunting task... at least for anyone with a normal social/family schedule. The question then becomes: how do we get (and provide) the information we need in a way that is simple, relevant and relational?

Simple: In order to find data easily, there needs to be one interface for doing so. While search engines try to provide this service, it is nowhere near complete. RSS readers like Google Reader are great tools for collecting feeds that you enjoy, but don't solve the relevant and relational pieces completely. Even then, you're responsible for finding the feeds and adding them. On the social front, applications are just starting to scratch the surface on data integration -- a current example would be TweetDeck, which allows you to view Twitter and Facebook statuses simultaneously. It would also be nice to have one interface for updating your own personal statuses and websites from one location. Sites like Posterous and Tumblr are really making great progress here, but there's still room for improvement.

Relevant: Sifting through the tons and tons of news articles, tweets, updates, and blog posts (and beyond!) to find data that is relevant to you personally can be quite the chore as well. It would be great to go to one interface that would present you with all of the data you would be interested in across several web services and search engines. Sure, there are plenty of suggestions and recommendations for who to follow, what to subscribe to, who to listen to, etc., but it would be even better if we could extract the relevant data from each of those sources and have it be presented in one unified interface. PostRank is a service that definitely has the right idea. Even their slogan is right on: "Find & Read What Matters." Beautiful. Sites like TechMeme, Google News and Newspond, also attempt to crawl the web to find the most breaking, relevant stories for a subset of topics.

Relational: Web 2.0 has, in large part, been about socializing everything on the web, and I think that it has inspired some truly amazing content. There's no reason to slow that down. There needs to be an easy way to view and share everything that you find with your friends and colleagues. It would be great to have one interface that would allow me to reply appropriately (based on the sharing context -- whether the source is Twitter, Facebook, Google Reader, etc.) to a friend, or just share data with anyone. The problem right now is that there are so many social services we can belong to that keeping each one up to date becomes a chore -- which I touched on in the "Simple" section. Finding and sharing items with friends is a must in the next iteration of data discovery on the Internet.

So, the next question, then, is how do we achieve these goals? My opinion is that we could accomplish this through the use of layers of abstraction. For example, Techmeme could be considered one layer of abstraction in that it gathers news articles from various baseline sources and presents a subset of those articles. In order to present relevant information simply to an end user, I would suggest that we would really need at least another layer of abstraction. Collect the news articles from sources like Techmeme, Newspond, Google News, etc. and present the user with data that is relevant to them based on interests, reading habits, recent conversations, etc. From that new abstract interface they would be able to share any of those items with anyone and any service, updating all of their profiles, blogs, etc. accordingly. All of this would be one small slice of the pie, considering you would also incorporate search and other social functionality among other things. So, I guess you could say that I see the current web services as necessary building blocks (that will need to remain in place) to reach that next level of finding and sharing data in an attempt to overcome information overload.

I'm sure there are plenty of services that I have not yet discovered -- so feel free to share your favorites in the comments section below!

Wednesday, June 24, 2009

"The specified port is unknown" When Adding a Network Printer

We currently have a Windows 2008 x64 server with print services installed, and we are sharing several network printers through that server. Every time a user logs in, a VBS script runs that maps certain printers for them depending on which groups they belong to in Active Directory. All was well and good until we randomly started getting the error message "The specified port is unknown" on one of our servers when trying to add the printers. So, instead of mapping the specified printers, the logon script failed, leaving the user printerless. I then went on to test adding the printer manually through the Control Panel, and to my surprise, I got the exact same error. What gives?

I tried several things on the server hosting the printers, but it made absolutely no difference whatsoever. Finally, I thought, maybe I should just restart the Print Spooler service on the server the user was logging into. As soon as I did that, everything worked again! So, I'm not exactly sure what caused the problem to occur in the first place, but I'm glad that I know how to fix it now.
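For reference, restarting the spooler from an elevated command prompt on the affected server is just:

net stop spooler
net start spooler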

Bonus: We were also having issues with the wrong username/job owner showing up on the printer separator sheets. Instead of printing the username of the person who printed, it would print the username of the last Domain Admin to print something. We searched Google near and far and came back with nothing. After running out of ideas, we pinged Microsoft support on the issue and discovered that this is a known problem with Server 2008 and Vista SP1, and there's actually a hotfix available for it: http://support.microsoft.com/kb/958741. I hope that someone finds this useful, since I had such a hard time finding the solution.

Tuesday, June 9, 2009

Getting around the Windows Rearm Limit with Sysprep and Altiris

We are currently in the process of deploying 9 IBM blade servers as a Citrix XenApp farm for our employees. After evaluating the options for maintaining the servers, and their images, we settled on using Altiris. The past several weeks have been consumed with coming up with a "Golden" image that we can use across all 9 servers so that each user's experience will be consistent and predictable (well, as much as possible anyway). Coming up with this "Golden" image required going through a few iterations of a Server 2008 build, installing applications, customizing options and updates, etc. We finally came to the point yesterday when we were ready to make one last change and take a final image -- and this is when I ran into a problem. For the life of me, I could not get Altiris to take the image properly from the "Golden" server.

For the record, we'd never used Altiris before, so we kept most of the default settings: we chose a path to save the image to, we chose to sysprep the server (using the default answer file), and then take the image. I was finally able to narrow the problem down to sysprep not running correctly. So, at that point, I went onto the server to try and run sysprep manually and got a fatal error! (A full description of the problem can be found in this MS knowledgebase article.)

Basically, it boils down to these facts: When Altiris syspreps a server it generalizes it (as it should, since this strips out all of the uniqueness of the server so the image can be applied to other servers), resetting the licensing information (rearming it), and whatever else it does to prepare the image. Now, this is all well and good except for the fact that Microsoft limits the number of rearms to 3 for any given Vista/Server 2008 image. Not really having been exposed to this world before, I burned through those 3 rearms very quickly :)

At that point I scoured the internet for workarounds for this problem, since I had a golden image, but could not sysprep it. I tried the answer file solution proposed in the knowledgebase article above, but could not get it to work for whatever reason. Finally after much searching I came across this article, which describes how to get around this limitation.

So, to solve this problem (if you've read the entire post until this point, well done -- but if you just skipped down here to the solution, I don't blame you), I just went into the registry on the "Golden" server and set the following key to a value of "1":
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SL\SkipRearm (For Windows 7: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SoftwareProtectionPlatform\SkipRearm, thanks Mike!)
Without a restart, I was able to run the Altiris imaging job again successfully. It cost me a day of work, so I hope someone finds this helpful. If you have any questions, feel free to leave a comment below.

Note: I also noticed that I had to make this registry change EVERY time before taking an image. The image does not retain the value; it gets reset after sysprep runs.
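Since the value has to be set again before every capture, it's worth scripting. A one-liner sketch using the built-in reg utility (swap in the SoftwareProtectionPlatform path on Windows 7):

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SL" /v SkipRearm /t REG_DWORD /d 1 /f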

Thursday, May 28, 2009

Importing Users into Active Directory on Server 2008

I have recently been tasked with figuring out how to import a large number of users into Active Directory in our Server 2008 environment. Now, I know there are several options out there, and I'm sure someone will try and sell me something in the comments section, but I found a way to do it and it works. And best of all, it's free!

I am utilizing a few different things to accomplish this task: csvde (built into Windows Server), admod (a free utility available here), and a batch file. The data sources are two files: a CSV file with the user data, and a tab-delimited file with user passwords.

There is a site with great examples on how to do a basic import with csvde located here. From that page you will find several other links to more complicated examples, and more in-depth looks at the procedure.

First things first, we need to create a CSV file with the desired LDAP attributes. For my purposes, I decided on the following:
  • DN - Distinguished name in the directory (ex: CN=user,OU=employees,DC=domain,DC=com)
  • objectClass - Defines the type of object to create (ex: user)
  • name - The user's name -- equivalent to CN (ex: Joe Smith)
  • displayName - self explanatory (ex: Joe Smith)
  • userAccountControl - This one takes a little math, but a good explanation can be found here (ex: 514)
  • sAMAccountName - this is what shows up in the pre-Windows 2000 logon name field in ADUC (ex: jsmith)
  • mail - email address (ex: jsmith@domain.com)
  • givenName - the user's first name
  • sn - the user's last name
  • userPrincipalName - this defines the user's logon account (including domain) (ex: jsmith@domain.com)
Not all of these are necessarily required (though some are), so you may need to play around with the options to get it just right for your environment. Remember: the first line of your CSV file needs to be the attribute names.
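For example, a file for a single (made-up) user might look like the following two lines -- note that a DN contains commas, so it has to be wrapped in quotes to survive the CSV format:

DN,objectClass,name,displayName,userAccountControl,sAMAccountName,mail,givenName,sn,userPrincipalName
"CN=Joe Smith,OU=employees,DC=domain,DC=com",user,Joe Smith,Joe Smith,514,jsmith,jsmith@domain.com,Joe,Smith,jsmith@domain.com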

Now, the one downside to csvde is that it does not handle user passwords. So, if your domain has a strict password policy, the import may fail. That's why, in my example, I chose the userAccountControl value of 514 (this defines the object as a normal user, but with the account disabled). Next, we'll need to define passwords. For this I created a separate tab-delimited text file (passwords.txt) that contains two values per row: the DN and the associated password. In a moment, you'll see why.
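Continuing the made-up example above, passwords.txt would contain one line per user, with a Tab between the DN and the password. The quotes keep a DN containing spaces together as a single argument when the batch file later hands it to admod:

"CN=Joe Smith,OU=employees,DC=domain,DC=com"	P@ssw0rd1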

Since we now have our source files ready to go, we're good to import! For this I created a batch file that looks like the following:
@echo off
echo "Importing from CSV File"
csvde -i -v -k -f %1
pause

echo "Setting Passwords and enabling accounts. Passwords never expire."
REM Loop through password file and update records
FOR /F "tokens=1,2 delims= " %%G IN (%2) DO (
admod -b %%G unicodepwd::%%H -kerbenc
admod -b %%G "userAccountControl::66048")

UPDATE: Apparently this Blog removes Tabs and replaces them with spaces. In the line "FOR /F "tokens=1,2 delims= " %%G IN (%2) DO (" above, there is actually a TAB in between "delims=" and the closing quotation mark. Be sure to remove the space and replace it with a Tab, by pressing the Tab key. Thanks to Jennifer for helping discover this!
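With the batch file saved as, say, import_users.bat (the name is just an example), a run looks like:

import_users.bat c:\temp\users.csv c:\temp\passwords.txt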

This batch file takes two parameters: the first being the path to the csv source file and the second being the path to the tab-delimited password file. Now let's take a look at what we're doing here :)
In the first section we are bulk importing users with the following command:
csvde -i -v -k -f %1
The "-i" option switches csvde to import mode, "-v" is verbose, "-k" tells it to ignore common warnings, and "-f" points it at the file specified by the next argument. "%1" is just the way to reference the first command line argument passed to the batch file. When this command has completed, the users will be populated into Active Directory, but disabled and without passwords.

Enter the next chunk:
FOR /F "tokens=1,2 delims= " %%G IN (%2) DO (
admod -b %%G unicodepwd::%%H -kerbenc
admod -b %%G "userAccountControl::66048")

This loops through the tab-delimited file specified by "%2" (the second command line argument) and does two things. First:
admod -b %%G unicodepwd::%%H -kerbenc
This command modifies the object specified by the DN in the text file to have the associated password. The passwords are saved as plain text in the file.
And second:
admod -b %%G "userAccountControl::66048"
This command modifies the object specified by the DN to update the user account control value. The value specified here enables the account (now that it has a valid password) and sets it so that the password never expires -- which was something just for our environment. Take another look at the control codes to set this value to whatever you need.
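For reference, here's how the two userAccountControl values used in this post break down (flag values from Microsoft's documentation):

514   = 512 (NORMAL_ACCOUNT) + 2 (ACCOUNTDISABLE)            -> normal user, account disabled
66048 = 512 (NORMAL_ACCOUNT) + 65536 (DONT_EXPIRE_PASSWORD)  -> normal user, enabled, password never expires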

And that's it! Now you have an Active Directory populated with enabled users with valid passwords. If you have any questions about this process, feel free to leave a comment below.

*DISCLAIMER* Always do stuff like this in a test environment first. I in no way guarantee this will work for you in your environment, or that it won't completely break your directory -- so again, always test first!

Monday, April 20, 2009

How to Use the Arc90 PHP API to Get Info From Twitter

I recently decided, as a programmer and self-proclaimed web developer, to dive into the Twitter API, mostly just for fun.  From the main Twitter API page you can access a series of prebuilt libraries for various programming languages, which provide wrappers around the main Twitter API.  With PHP being my web development language of choice, I quickly glanced over the libraries available, with one clearly standing out: Arc90.  Not only is it maintained by a fellow "Matt," it also provides access to pretty much anything you could think of in the world of Twitter.  By the end of this article, you should be able to pull all of your followers from Twitter and list some basic data about them.

Installation and Environment: The main page for Arc90 provides a very basic introduction on how to install and use the package, which is sufficient for getting your PHP environment ready to use.  So, for the purpose of this, I will assume that your PHP environment is all set, your includes are in order and cURL is ready to go.  If you need further help with any of those areas, feel free to leave a comment below.  So, I will assume you have a set up like the following:

  1. <?php  
  2.   
  3. require_once('Arc90/Service/Twitter.php');  
  4.   
  5. $twitter = new Arc90_Service_Twitter('username', 'password');  
  6.   
  7. try  
  8. {   
  9.     // Gets the users following you, response in JSON format  
  10.     $response = $twitter->getFollowers();  
  11.   
  12.     // Store the JSON response  
  13.     $theData = $response->getData();
  14.   
  15.     // If Twitter returned an error (401, 503, etc), print status code  
  16.     if($response->isError())  
  17.     {  
  18.         echo $response->http_code . "\n";  
  19.     }  
  20. }  
  21. catch(Arc90_Service_Twitter_Exception $e)  
  22. {  
  23.     // Print the exception message (invalid parameter, etc)  
  24.     print $e->getMessage();  
  25. }  

  26. ?>
Note a few changes: Obviously, on line 5 you will need to replace 'username' and 'password' with your login credentials.  On line 10, I went ahead and removed the parameter to request an XML response.  I saw that the default was JSON and wanted to try that (since that is another new technology for me).  I'm also using the getFollowers() API call instead.  On line 13, I'm storing the returned data for future use. 

Important Note Regarding API Rate Limits: If you are unfamiliar with Twitter and its associated API calls, you need to make sure that you don't overdo it on the number of calls you make.  By default, Twitter allows you 100 API calls per hour (this includes using third party applications like TweetDeck or Nambu), so you'll need to keep an eye on how many calls you're using.  If you go over that limit, you can potentially be blacklisted -- which makes no one happy.  You have a couple of options: 1) Request to be whitelisted using this form.  This will allow you to make 20,000 requests per hour, or 2) Watch your rate limit using a built-in API method which monitors your status, and conveniently does not cost you an API call to use.  You can find more info regarding limits, whitelisting and blacklisting here.  Also, during development, I've saved the output of a particular API call to a text document that I can parse while I test.  It doesn't provide real-time data, but it allows me to work on it as many times as I want without a single API call.
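In practice that caching trick looks something like the sketch below.  This is my own convenience code, not part of the Arc90 library -- the cache file name is made up, and it reuses the $twitter object and the getFollowers()/getData() calls from the earlier example:

<?php
$cacheFile = 'followers.json';   // hypothetical local cache file

if (!file_exists($cacheFile)) {
    // Costs one API call; only happens when the cache is missing.
    $response = $twitter->getFollowers();
    file_put_contents($cacheFile, $response->getData());
}

// Free from here on out -- parse the cached copy while testing.
$theData = file_get_contents($cacheFile);
?>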

Using the Data: Now that we have our data response in JSON format to work with, let's get to it! To make good use of the data returned, I chose to put it in an associative array for organized access.  To do this, we use the json_decode method built into PHP as follows:
$dataArray = json_decode($theData, true);
To view an example of the kind of data that a getFollowers API call will return, we can view this method page in the documentation.  Also, for a detailed description of each key that can possibly be returned from any of the various API calls, you'll want to keep the complete list handy.  I ended up making a spreadsheet with the various API calls that I tested along with the associated return values, for a quick reference -- so you may want to do the same.
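If you want a quick look at which keys actually came back before building anything around them, a throwaway dump of the first follower works well enough (this sketch assumes you have at least one follower):

<?php
$dataArray = json_decode($theData, true);
echo '<pre>';
print_r(array_keys($dataArray[0]));   // just the field names for the first user
echo '</pre>';
?>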

Let's say that we just want to loop through our followers (or at least the first 100) and print out some data about each one, line by line.  All we have to do is iterate through the array, pulling out the data we want and display it.  I stuck with a basic HTML table for this, but all you CSS gurus out there can have way more fun with this.  This assumes that you already have your associative data array, dataArray, ready to go:
<html>
    <head>
        <title>Test</title>
    </head>
    <body>
        <h1>Followers</h1>
        <table border="1">
            <tr>
                <td>Image</td>
                <td>Screen Name</td>
                <td>Name</td>
                <td>Description</td>
                <td>URL</td>
                <td>Location</td>
                <td>Following</td>
                <td>Followers</td>
                <td>Status</td>
                <td>Status Count</td>
            </tr>
<?php
    $numItems = count($dataArray);
    for($i = 0; $i < $numItems; $i++){
        $currentItem = $dataArray[$i];

        $screenName = $currentItem["screen_name"];
        $description = $currentItem["description"];
        $followersCount = $currentItem["followers_count"];
        $url = $currentItem["url"];
        $name = $currentItem["name"];
        $status = "";
        if(isset($currentItem["status"]))
        {
            $status = $currentItem["status"]["text"];
        }
        $friendsCount = $currentItem["friends_count"];
        $profileImage = $currentItem["profile_image_url"];
        $location = $currentItem["location"];
        $statusCount = $currentItem["statuses_count"];
        echo "<tr><td><a href='http://www.twitter.com/".$screenName."'><img height='48' width='48' src='".$profileImage."'></a></td><td>".$screenName."</td><td>".$name."</td><td>".$description."</td><td>".$url."</td><td>".$location."</td><td>".$friendsCount."</td><td>".$followersCount."</td><td>".$status."</td><td>".$statusCount."</td></tr>";
    }
?>
        </table>
    </body>
</html>
You should end up with a simple table showing each follower's picture, screen name, bio and counts -- one row per user.

If you have any trouble working through this, or have any questions, just let me know!

Monday, April 13, 2009

Dipping My Toes in the Twitter Pool: A Beginner's Analysis

There's no doubt that the micro-blogging service Twitter has exploded in popularity within the last month or so. More and more websites and blogs seem to have Twitter accounts, and the service doesn't seem to be going away anytime soon -- even though they do have trouble from time to time supporting the demand. Being the person that I am, I had no choice but to check out Twitter and create an account, so I could check out this flourishing service for myself.

If you are a power Twitter user, or feel like you are already comfortable with the service, this post probably isn't for you. But, if you're new to it, trying to make sense of everything, I hope this will at least give you a little insight. Here are my discoveries:
Setting up a Bio: I tend to fall on the paranoid side of things when it comes to privacy on the Internet. I'm always concerned anytime I enter information even as simple as an email address -- so you can imagine, I really didn't know what to put in my bio. So, at first, I didn't put anything. After previewing a few different accounts, I've at least decided to improve upon that. I have added my name (just my first name for now) and a quick description of what I'm interested in and what I do. There's now even a nice little mugshot up there. I've come to the conclusion that it's very important to have personal and relevant information about yourself in your bio. Other users like to be able to put a face with a name, and if you're trying to build a community of users with similar interests, telling a little bit about yourself is just as important. Plus, you're proving to other users that you're not just a bot when you follow them (more on that in a bit).

Public Versus Protected Updates:
Twitter gives you the option to protect your updates. If you choose this mode, your updates will not be visible to anyone but those you grant explicit rights to, as you will have to approve every follower individually. Naturally, I started down this path, being on the privacy paranoid side of the fence. Once I began to better understand what I believe to be the purposes of the Twitter service, I decided to remove that restriction -- now I just make sure that I don't post any updates that are too revealing.

Following, Followers and Blocking:
At first, I was really unsure of who I should be following. I did a quick search through my email contacts and discovered that only a couple of my friends were on Twitter -- so I followed them. So, that left me with a few people I was following and even fewer followers. To really utilize Twitter, and all that it can offer, following people you haven't met in person is a must. My first venture into adding new users to follow was going through the users that my friends were following. I figured, hey, if they're interested in a particular user, I may be as well. Once I was done with that, really all I had left to do was search. Now, at this point, one thing that Twitter is really lacking is a good way to search for people you would probably be interested in following. However, there are other websites that have capitalized on this lack of search (We Follow is the directory I currently prefer, @wefollow). Followers, on the other hand, can be a little bit harder to come by -- which in my opinion, is a good thing. I really see Twitter as a great place to get real time feedback from a community of users with similar interests and occupations. However, sometimes followers come a little too easily. Every time I'm notified that I have a new follower, I check out their bio to see who they are. If they seem to be a bot of any kind, any kind of random marketer, someone who is just trying to increase their own follower count, or otherwise not a regular human, I actually block them from following me. I'm still not sure if this is the "proper" way of handling this, but it seems to be working out fine so far -- I don't want to follow anyone like that, and really, as people look through my followers, I wouldn't want to be associated with anything like that either.

Twitter as a News Source: One thing that I absolutely love about Twitter is the real-time aspect of it. With that comes real-time news updates. The best all-around news source that I've discovered up to this point is Breaking News (@BreakingNews). There are several other technology-related websites that I also follow on Twitter that provide close to real-time updates in the world of technology. At this point, I really can't consider Twitter as a viable alternative to RSS. I use Google Reader for dozens of feeds (including items my friends have shared), and I like the ability to go back and view the titles and brief overviews as I have time, and be able to email the stories to friends and family, as well as comment on stories with other Google Reader users. Until there is a better way to organize, preview and share all of those news stories in Twitter, I think I'll just stick to the headlines for now.

Battle of the Twitter Clients: There really seems to be a "Gold Rush" type opportunity for those building Twitter clients at the moment. In my opinion, the frontrunners in the Windows world are TweetDeck (@TweetDeck) and Twhirl/Seesmic Desktop. And then on Mac OS X, TweetDeck and Nambu (@NambuCom) are the clients I'm currently keeping up with. If you're not using a third-party Twitter client, you're really missing out on the full potential of the service. For example, with TweetDeck, you can use a multi-column layout that allows you to filter tweets based on different criteria (such as by groups of users that you define, or a search on Twitter, or all of your direct messages, etc.). Also, with TweetDeck, you can tie into your Facebook account and set up another column for all of your Facebook Friends' status updates. Options like these make having a third-party client priceless.

Don't Worry About it: Above everything else, don't worry about the number of followers you have, or if you can't think of what to post. It really does take some time getting used to the environment and the community of users out there. But, if you do invest the time, Twitter can really be used to enhance your experience on the web, build a community of new friends, and keep up with what's going on in the outside world. The important thing is to be yourself, and post things that are relevant to you. Or, if you really like what someone else has posted, feel free to re-post it (giving them the appropriate credit of course.) So, if you've made it all the way to the end of this and have any additional advice you'd like to add -- or have any questions you think I may be able to answer, feel free to leave a comment below!

Wednesday, March 11, 2009

Building PHP on AIX

This is more for my own benefit than anything else (or for anyone who stumbles across this through Google, or whatever search engine you prefer). I had quite the time building PHP on AIX, so I just wanted to chronicle what I did, including getting the PHP Excel package to work.

I roughly followed the steps for installing PHP on AIX available at:
http://www.ibm.com/developerworks/wikis/display/WikiPtype/aixopen

That walked me through installing the prerequisites, and any other Linux/GNU tools I might want.

Now, I just wanted the command line executable, so I skipped Apache altogether. Here's what I did to build it:
export PATH=/opt/freeware/bin:$PATH
export CC=xlc
export CXX=xlc++
./configure --prefix=/usr/local/php --with-gd --with-zlib-dir=/opt/freeware --enable-shared --enable-debug --enable-zip --disable-static --with-zlib --with-bz2 --with-jpeg-dir=/opt/freeware --with-png-dir=/opt/freeware --with-xpm-dir=/opt/freeware --with-freetype-dir=/opt/freeware
ulimit -S -d unlimited
gmake
gmake install

In case you're wondering, the ulimit command removes restrictions on how much memory can be used by processes in the current session. I also had to use gmake -- with plain make, the build just failed over and over again.
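Once gmake install finishes, a quick sanity check that the binary runs and that the extensions you configured actually made it in (zip shown here just as an example):

/usr/local/php/bin/php -v
/usr/local/php/bin/php -m | grep -i zip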

Now, the whole point of installing PHP was so that I could create Excel xlsx files (which can go beyond the row limit of Office 2003), so I used the current release found at:
http://www.codeplex.com/PHPExcel

Since I was building extremely large spreadsheets, I had to update the memory limit in php.ini (and I upped the max execution time for good measure). But, even with this, I was hitting memory caps. So, the first thing I did was use the same ulimit command (mentioned above) before executing the script. This helped with the medium-sized spreadsheets, but with the large ones I was getting a segmentation fault.
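For reference, the php.ini changes amounted to something like the following -- the exact values are just a starting point and will depend on your spreadsheet sizes:

memory_limit = 1024M
max_execution_time = 1800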

After a little research, I discovered the following page:
http://www-01.ibm.com/support/docview.wss?rs=639&context=SSTFW4&dc=DB520&dc=DB560&uid=swg21302795&loc=en_US&cs=UTF-8&lang=en&rss=ct639tivoli

It explains why, when using XML (which the 2007 format uses), it can cause issues with processes that hog memory. So, I just took the solution from the page, which was to use the following command (in addition to the ulimit command) before calling the php script:
export LDR_CNTRL=MAXDATA=0x80000000

It was a mess, but I'm pretty sure it'll work now -- regardless of file size! It just takes a while. The last test I did, with 10 columns and around 66,000 rows, took about 25 minutes and 800MB of memory to complete! Now if only the PHPExcel folks keep working on the amount of memory required (which I believe they are), we'll be in business.