<?xml version="1.0" encoding="UTF-8"?><rss
version="2.0"
xmlns:content="http://purl.org/rss/1.0/modules/content/"
xmlns:wfw="http://wellformedweb.org/CommentAPI/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:atom="http://www.w3.org/2005/Atom"
xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
xmlns:georss="http://www.georss.org/georss"
xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
> <channel><title>Brett Batie</title> <atom:link href="https://brett.batie.com/feed/" rel="self" type="application/rss+xml" /><link>https://brett.batie.com</link> <description>Thoughts of a Software Engineer.</description> <lastBuildDate>Mon, 20 Jun 2016 15:31:57 +0000</lastBuildDate> <language>en-US</language> <sy:updatePeriod> hourly </sy:updatePeriod> <sy:updateFrequency> 1 </sy:updateFrequency> <generator>https://wordpress.org/?v=5.9.12</generator> <item><title>Git Flow &#8211; Remove Local &#038; Remote Feature Branches</title><link>https://brett.batie.com/uncategorized/git-flow-remove-local-remote-feature-branches/</link> <comments>https://brett.batie.com/uncategorized/git-flow-remove-local-remote-feature-branches/#respond</comments> <dc:creator><![CDATA[Brett]]></dc:creator> <pubDate>Thu, 16 Jun 2016 14:00:02 +0000</pubDate> <category><![CDATA[Software Development]]></category> <category><![CDATA[Uncategorized]]></category> <guid
isPermaLink="false">http://brett.batie.com/?p=703</guid> <description><![CDATA[<p>I love using the Git branching model outlined by Vincent Driessen. My flow is almost identical to what he has described except for where the feature branches are stored. Specifically he states: Feature branches typically exist in developer repos only, not in origin. I&#8217;m a little less trusting with feature branches and instead often push [&#8230;]</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/uncategorized/git-flow-remove-local-remote-feature-branches/">Git Flow &#8211; Remove Local &#038; Remote Feature Branches</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></description> <content:encoded><![CDATA[<p>I love using the <a
href="http://nvie.com/posts/a-successful-git-branching-model/">Git branching model outlined by Vincent Driessen</a>. My flow is almost identical to what he has described, except for where the feature branches are stored. Specifically, he states:</p><blockquote><p> Feature branches typically exist in developer repos only, not in origin.</p></blockquote><p>I&#8217;m a little less trusting with feature branches and instead often push them to the central remote repository. For me, I fear that days of work could be lost by keeping the work solely on my repo until it is ready to be merged back into the develop branch. I can imagine my repository being corrupted, my hard drive dying or my computer being stolen, all causing lost work. I don&#8217;t like to leave my hard work to <a
href="https://en.wikipedia.org/wiki/Murphy%27s_law">Murphy&#8217;s law</a>.</p><p>Now, one could use a backup strategy to mitigate this risk, and I highly recommend doing so. I have used <a
href="https://www.crashplan.com/en-us/">CrashPlan</a> for years and it has saved me a few times, making CrashPlan worth every penny. However, I also like to know there is a central Git repository holding my code (again, a trust issue). Plus, some companies will not allow their code to be placed on any server outside of their network.</p><p>Since my entire team and I have chosen to place our &#8220;feature-*&#8221; branches on our central remote repository, it has caused a small issue. We find that we end up with a lot of old and forgotten branches on this remote repository as well as on our local machines. Of course, Git is great at the task of removing old branches.</p><h3>Delete Local &amp; Remote Branches</h3><p>We can remove the remote branch by using the command:</p><pre><code class="bash">git push origin --delete &lt;branchName&gt;
</code></pre><p>Then we can remove the local branch with the command:</p><pre><code class="bash">git branch -d &lt;branchName&gt;
</code></pre><p>Great, problem solved: I can run those two commands each time I&#8217;m done working on a feature. But I don&#8217;t want to remember two commands when one will do. So, I put together a couple of bash functions to make this easier. Just drop them in your <code>~/.bashrc</code> file, or in the appropriate dotfiles directory if you happen to be using my <a
href="https://github.com/brettbatie/dotfiles">Dot File Manager</a>.</p><pre><code class="bash">function confirm () {
    # call with a prompt string or use a default
    read -r -p "${1:-Are you sure? [y/N]} " response
    case $response in
        [yY][eE][sS]|[yY])
            true
            ;;
        *)
            false
            ;;
    esac
}
function git-delete-branch () {
  if [ "$#" -ne 1 ]; then
      printf "Usage: %s branchName\nWill delete the specified branch from both the local repository and the remote\n" "$FUNCNAME";
      return 1;
  fi
  echo "Delete the branch '$1' from your local repository?" &amp;&amp; confirm &amp;&amp; git branch -d "$1";
  echo "Delete the branch '$1' from the remote repository?" &amp;&amp; confirm &amp;&amp; git push origin --delete "$1";
}
</code></pre><p>After adding these functions don&#8217;t forget to use the <a
href="http://www.tldp.org/HOWTO/Bash-Prompt-HOWTO/x237.html">source command</a> to load the functions into your shell.</p><p>Now a branch can be easily deleted from both the local and remote repository or just one of them by executing the command:</p><pre><code class="bash">git-delete-branch &lt;branchName&gt;
</code></pre><h3>Delete Merged Branches</h3><p>Since we have multiple developers all pushing their branches to the central repository, we find that over time we end up with a long list of orphaned branches (branches that no one is working on anymore). An easy way to clean up most of these branches starts with the following command, which shows the branches that have already been merged into your current branch:</p><pre><code>git branch --merged
</code></pre><p>Then each branch can be manually deleted with the command:</p><pre><code>git push origin --delete &lt;branchName&gt;
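</code></pre><p>As an aside, once branches are deleted on the central repository, everyone else&#8217;s clone still carries a stale <code>origin/&lt;branchName&gt;</code> remote-tracking ref until it is pruned. The throwaway-repo sketch below (all names are made up for the demo) shows <code>git fetch --prune</code> cleaning one up:</p>

```bash
# Disposable demo: a bare "central" repo plus a working clone.
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare remote.git
git clone -q remote.git work
cd work
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"

# Publish a feature branch, then delete it directly on the central repo,
# as a teammate would from their own clone.
git push -q origin HEAD:feature-old
git -C ../remote.git branch -D feature-old

# Our clone still lists origin/feature-old until we prune it.
git fetch --prune origin
git branch -r
```

<p>Running <code>git fetch --prune</code> periodically (or setting <code>fetch.prune true</code> in your config) keeps those stale refs from piling up.</p><pre><code>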
</code></pre><p>Again, I like things simple, so I put together the following function to streamline the process:</p><pre><code>function git-delete-merged-branches () {
  echo &amp;&amp; \
  echo "Branches that are already merged into $(git rev-parse --abbrev-ref HEAD) and will be deleted from both local and remote:" &amp;&amp; \
  echo &amp;&amp; \
  git branch --merged | grep feature &amp;&amp; \
  echo &amp;&amp; \
  confirm &amp;&amp; git branch --merged | grep feature | xargs -n1 -I '{}' sh -c "git push origin --delete '{}'; git branch -d '{}';"
}
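</code></pre><p>One caveat with parsing <code>git branch --merged</code> is that its output is padded with leading spaces and marks the current branch with <code>*</code> (the <code>xargs -I</code> call above gets away with it because insert mode ignores leading blanks). A cleaner listing (a sketch, assuming Git 2.7+ for <code>--merged</code> on <code>for-each-ref</code>) uses plumbing instead, demonstrated here in a disposable repo:</p>

```bash
# Disposable repo with one branch that already counts as merged
# (it points at the same commit as HEAD).
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"
git branch feature-done

# Plain branch names: no leading spaces, no "*" marker.
git for-each-ref --format='%(refname:short)' --merged=HEAD refs/heads | grep feature    # prints: feature-done
```

<p>Swapping that pipeline into <code>git-delete-merged-branches</code> avoids any whitespace surprises when branch names are fed to other commands.</p><pre><code>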
</code></pre><p>Make sure you also include the <code>confirm</code> function pasted above.</p><h3>Another tool</h3><p>Another possible solution to the problem stated above is to use the <a
href="https://github.com/nvie/gitflow">git-flow</a> tool. At first glance it seems to be a handy tool to automate many of the tasks in the GitFlow process.</p><p>However, in my short experience with it, I found it doing things I didn&#8217;t expect. I then checked the GitHub repository and noticed an unhealthy number of open issues (175) and pull requests (78), with the last commit dating to September 2012. This project is officially dead in my book.</p><p>More recently, I found another fork of this project that is being kept up to date. It&#8217;s called <a
href="https://github.com/petervanderdoes/gitflow-avh">git-flow (AVH Edition)</a>. This one might have more potential.</p><p>However, I find the GitFlow approach easy enough to carry out with standard Git commands that I have not yet taken the time to test out this newer git-flow (AVH Edition) tool.</p><p>Have you used the newer git-flow tool? What are your experiences with it? Do you have any other Git commands or tools that have helped with your day-to-day Git operations?</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/uncategorized/git-flow-remove-local-remote-feature-branches/">Git Flow &#8211; Remove Local &#038; Remote Feature Branches</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></content:encoded> <wfw:commentRss>https://brett.batie.com/uncategorized/git-flow-remove-local-remote-feature-branches/feed/</wfw:commentRss> <slash:comments>0</slash:comments> </item> <item><title>Install a Free SSL Certificate (LetsEncrypt) on CentOS with Apache and WordPress</title><link>https://brett.batie.com/software-development/__trashed-2/</link> <comments>https://brett.batie.com/software-development/__trashed-2/#respond</comments> <dc:creator><![CDATA[Brett]]></dc:creator> <pubDate>Sat, 04 Jun 2016 16:16:10 +0000</pubDate> <category><![CDATA[Software Development]]></category> <guid
isPermaLink="false">http://brett.batie.com/?p=705</guid> <description><![CDATA[<p>I&#8217;ve had my blog running on port 80 for years and have finally decided it is time to deprecate HTTP and move everything to a secure SSL connection. This decision was a lot easier to make now that Let&#8217;s Encrypt is providing free SSL certificates and has been out of beta since April. I also [&#8230;]</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/__trashed-2/">Install a Free SSL Certificate (LetsEncrypt) on CentOS with Apache and WordPress</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></description> <content:encoded><![CDATA[<p>I&#8217;ve had my blog running on port 80 for years and have finally decided it is time to <a
href="https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/">deprecate HTTP</a> and move everything to a secure SSL connection. This decision was a lot easier to make now that <a
href="https://letsencrypt.org/">Let&#8217;s Encrypt</a> is providing free SSL certificates and has been <a
href="https://letsencrypt.org/2016/04/12/leaving-beta-new-sponsors.html">out of beta since April</a>. I also appreciate that the entire installation can be done via command line and that the certificate can be automatically renewed a month before it expires. Wahoo, no more pesky calendar reminders to tell me to hurry up and buy a new certificate before it expires and manually install it.</p><p>With that said, my blog is currently running on CentOS 6 with Apache with vhost files placed in a non-standard directory by <a
href="https://www.directadmin.com/">DirectAdmin</a>. This means I will have to manually add the certificate information to the vhost file for each host instead of letting Let&#8217;s Encrypt do all the work for me. That&#8217;s OK though, as I will only have to do this once.</p><p>Also, CentOS 6 will throw a little curveball, as it doesn&#8217;t have Python 2.7 set up by default and depends on Python 2.6 for <a
href="https://access.redhat.com/solutions/9934">yum</a>. So, care will be taken to get them both set up on the machine.</p><p>Here are the steps needed to set up Let&#8217;s Encrypt. First, we need to set up the IUS repository with the following commands:</p><pre><code>wget https://centos6.iuscommunity.org/ius-release.rpm
sudo rpm -Uvh ius-release*.rpm
rm ius-release.rpm
</code></pre><p>Then, we need to get Python set up:</p><pre><code>sudo yum update
sudo yum install centos-release-scl python27 python27-devel python27-pip python27-setuptools python27-virtualenv
</code></pre><p>Next, we will install pip, as that gives us an easy way to install Let&#8217;s Encrypt and update it in the future.</p><pre><code>sudo easy_install-2.7 pip
</code></pre><p>Now we will install Let&#8217;s Encrypt (which also goes by the name certbot):</p><pre><code>sudo pip2.7 install letsencrypt letsencrypt-apache
</code></pre><p>If you happen to have a very vanilla Apache setup and are running Debian, then the following command to generate and install the certificate may magically set up everything for you. This was not the case for me, so I didn&#8217;t use this step.</p><pre><code>sudo certbot --apache -d brett.batie.com
</code></pre><p>Or, you could try to be more specific about where your config files are located for Apache as I did in the following command. However, at the time of this writing this does not work if your <a
href="https://github.com/certbot/certbot/issues/2776">vhost file has more than one vhost in it</a>. So, I didn&#8217;t use this step either.</p><pre><code>sudo certbot --apache --apache-server-root /etc/httpd/ --apache-vhost-root /usr/local/directadmin/data/users/admin/httpd.conf -d www.brett.batie.com
</code></pre><p>The route that I actually took was to use the following command which only generates the certificate. This is the <a
href="http://letsencrypt.readthedocs.io/en/latest/using.html#webroot">webroot</a> approach which differs from the above <a
href="http://letsencrypt.readthedocs.io/en/latest/using.html#apache">Apache</a> approach.</p><pre><code>sudo certbot certonly --webroot --webroot-path /home/admin/domains/brett.batie.com/public_html/ -d brett.batie.com
</code></pre><p>This provided output like the following:</p><pre><code>IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at
/etc/letsencrypt/live/brett.batie.com/fullchain.pem. Your cert will
expire on 2016-09-01. To obtain a new or tweaked version of this
certificate in the future, simply run certbot again. To
non-interactively renew *all* of your certificates, run "certbot
renew"
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
</code></pre><p>Sweet, I have a certificate! Now, I just have to tell Apache to use it. My vhost file is located at <code>/usr/local/directadmin/data/users/admin/httpd.conf</code> and I need to add the following 4 lines to the appropriate vhost in that file.</p><pre><code>Include /etc/letsencrypt/options-ssl-apache.conf
SSLCertificateFile /etc/letsencrypt/live/brett.batie.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/brett.batie.com/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/brett.batie.com/fullchain.pem
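</code></pre><p>For context, a minimal port-443 vhost carrying those directives might look like the sketch below. The <code>DocumentRoot</code> and paths are examples only; a DirectAdmin-generated vhost will contain many more directives, and on Apache 2.2 (as on CentOS 6) the separate <code>SSLCertificateChainFile</code> line is still required:</p>

```apache
<VirtualHost *:443>
    ServerName brett.batie.com
    DocumentRoot /home/admin/domains/brett.batie.com/public_html
    SSLEngine on
    Include /etc/letsencrypt/options-ssl-apache.conf
    SSLCertificateFile /etc/letsencrypt/live/brett.batie.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/brett.batie.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/brett.batie.com/fullchain.pem
</VirtualHost>
```

<pre><code>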
</code></pre><p>Now do a graceful restart of Apache and we should have the new SSL certificate up and running.</p><pre><code>sudo service httpd graceful
</code></pre><p>You can load the site in your browser to see if it is using the new SSL certificate as well as test it at <a
href="https://www.ssllabs.com/ssltest">SSL Labs</a> to see if you received a passing grade.</p><p>Since I decided to completely remove HTTP (port 80) from my site, I also added the following redirect, which could be placed in the appropriate (port 80) vhost (preferred) or in a .htaccess file (less preferred).</p><pre><code>Redirect / https://brett.batie.com/
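</code></pre><p>If you do go the .htaccess route, a mod_rewrite equivalent that preserves the requested path would look like the sketch below (it assumes mod_rewrite is enabled; note that <code>Redirect /</code> in the vhost already appends the remainder of the path on its own):</p>

```apache
RewriteEngine On
# Send any plain-HTTP request to the same path over HTTPS
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://brett.batie.com/$1 [R=301,L]
```

<pre><code>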
</code></pre><p>Now, we just need to automate the certificate renewal by adding the following to a cronjob:</p><pre><code>@monthly /usr/bin/certbot renew
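</code></pre><p>One caveat with the webroot approach: Apache keeps serving the old certificate until it is reloaded. A simple way around this (a sketch; the paths are examples, so adjust for your system) is to chain a graceful reload onto the cron entry. <code>certbot renew</code> exits successfully even when nothing was due, so the reload just runs harmlessly each month:</p>

```
@monthly /usr/bin/certbot renew && /sbin/service httpd graceful
```

<pre><code>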
</code></pre><p>As you can see, there were a few steps involved in this process, but overall I found it easier than generating certificate requests, verifying domain ownership, receiving an email with the certificate, tracking down the entire certificate chain and then manually putting all the files where they needed to go. The fact that Let&#8217;s Encrypt is free and simplifies the process makes it a no-brainer, and it should help our Internet become a little more secure.</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/__trashed-2/">Install a Free SSL Certificate (LetsEncrypt) on CentOS with Apache and WordPress</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></content:encoded> <wfw:commentRss>https://brett.batie.com/software-development/__trashed-2/feed/</wfw:commentRss> <slash:comments>0</slash:comments> </item> <item><title>Dot File Manager</title><link>https://brett.batie.com/software-development/dot-file-manager/</link> <comments>https://brett.batie.com/software-development/dot-file-manager/#respond</comments> <dc:creator><![CDATA[Brett]]></dc:creator> <pubDate>Thu, 17 Oct 2013 18:44:16 +0000</pubDate> <category><![CDATA[Scripting]]></category> <category><![CDATA[Software Development]]></category> <guid
isPermaLink="false">http://brett.batie.com/?p=654</guid> <description><![CDATA[<p>Today I finished putting together a dot file manager. This is a tool that helps manage all of the settings and configurations for multiple computers. I&#8217;m currently storing configs and settings for vim, sublime, bash, aliases, custom functions, bin files, thunar, diffuse, kupfer and the like. This tool allows building out a new computer with [&#8230;]</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/dot-file-manager/">Dot File Manager</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></description> <content:encoded><![CDATA[<p>Today I finished putting together a dot file manager. This is a tool that helps manage all of the settings and configurations for multiple computers. I&#8217;m currently storing configs and settings for vim, sublime, bash, aliases, custom functions, bin files, thunar, diffuse, kupfer and the like. This tool allows building out a new computer with all of my settings very quickly and also helps me keep the configs of multiple computers/servers in sync. No more painful copying to USB, emailing or using scp/rsync to manually pull my settings and configs. YEAH!</p><p>The installation of this tool couldn&#8217;t be easier, with a simple one-line command:</p><pre class="brush: bash; gutter: false">bash &lt;(wget -nv -O - https://raw.github.com/brettbatie/dotfiles/master/bin/dotm)</pre><p>That command will download all settings/configs into a dotfiles directory and then ask if symbolic links should be created. The dot file manager tool (called dotm) can be used by anyone to store their custom configs (without being forced to use mine). I hope others find it useful; I know I have.</p><p>More detailed instructions about this tool can be viewed on my GitHub account: <a
href="https://github.com/brettbatie/dotfiles">https://github.com/brettbatie/dotfiles</a></p><p>Below are some features that this tool currently supports. Please feel free to submit <a
href="https://github.com/brettbatie/dotfiles/issues">suggestions or issues on github</a>.</p><div
style="background-color: #ccc;"><ul><li>Handles <strong>symlinks to files in subdirectories</strong> of the dot file directory. This will match the directory structure in the user&#8217;s home directory. So ~/dotfiles/somedir/.somefile will have a link created in ~/somedir/.somefile.</li><li><strong>Symlinks to directories</strong> in the dot files directory. Any directory name that ends in .lnk will have a corresponding symlink pointing to it from the home directory. So ~/dotfiles/somedir.lnk/ will have a symlink at ~/somedir.</li><li>Creates <strong>backups</strong> of files that exist in the user&#8217;s home directory. Places the backup in a user-defined directory (~/dotfiles/backup by default) and appends a timestamp to the filename.</li><li><strong>Automatically creates symlinks</strong> for all files in the dot files directory (~/dotfiles by default) and subdirectories except for <a
href="https://github.com/brettbatie/dotfiles#Special-Directories">special directories</a>.</li><li><strong>Custom directory</strong> (default ~/dotfiles/custom) to put files that won&#8217;t be symlinked.</li><li><strong>Bin directory</strong> (default ~/dotfiles/bin) to put scripts that won&#8217;t be symlinked. Will be added to the path via .bashrc.</li><li><strong>Source directory</strong> (default ~/dotfiles/source) to put source files that won&#8217;t be symlinked. Will be sourced via .bashrc.</li><li>Option to <strong>ask before creating each symlink</strong>.</li><li>Option to create <strong>symlinks from a minimal list</strong>. Allowing for only some symlinks to be created on specific servers.</li><li>Updates dot files from <strong>remote repository</strong> on each run.</li><li><strong>Command line options</strong> to change default settings (remote repository, dotfile directory, etc).</li></ul></div><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/dot-file-manager/">Dot File Manager</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></content:encoded> <wfw:commentRss>https://brett.batie.com/software-development/dot-file-manager/feed/</wfw:commentRss> <slash:comments>0</slash:comments> </item> <item><title>Mysqldump Specific Table From All Databases</title><link>https://brett.batie.com/software-development/mysqldump-specific-table-from-all-databases/</link> <comments>https://brett.batie.com/software-development/mysqldump-specific-table-from-all-databases/#comments</comments> <dc:creator><![CDATA[Brett]]></dc:creator> <pubDate>Tue, 22 Jan 2013 21:39:30 +0000</pubDate> <category><![CDATA[Software Development]]></category> <guid
isPermaLink="false">http://brett.batie.com/?p=633</guid> <description><![CDATA[<p>I recently had a task where I needed to export a specific table that was in a few hundred different databases. However, mysqldump does not have a way to specify that a specific table should be dumped out of every database. See the supported formats below: mysqldump [options] db_name [tbl_name ...] mysqldump [options] --databases db_name [&#8230;]</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/mysqldump-specific-table-from-all-databases/">Mysqldump Specific Table From All Databases</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></description> <content:encoded><![CDATA[<p>I recently had a task where I needed to export a specific table that was in a few hundred different databases. However, mysqldump does not have a way to specify that a specific table should be dumped out of every database. See the supported formats below:</p><p><code>mysqldump [options] db_name [tbl_name ...]<br
/> mysqldump [options] --databases db_name ...<br
/> mysqldump [options] --all-databases</code></p><p>I was hoping for a command like: <code>mysqldump --all-databases 'table_name'</code>.</p><p>mysqldump does have an <code>--ignore-table</code> option, but in my case there were too many different tables to list and I didn&#8217;t want to go there.</p><p>My next thought was to build a quick PHP script that would loop through every database, check if the desired table exists and then mysqldump it. Before I had the chance to start on this approach, I realized I could accomplish it with a one-line shell command. The approach I took was the following:</p><pre class="brush: bash;">mysql -s -N -e "select TABLE_SCHEMA from information_schema.tables where TABLE_NAME='users'" | xargs -I % sh -c 'mysqldump % users | mysql -uUSERNAME -pPASSWORD -hHOST %'</pre><p>In the example above, I got a list of all databases (TABLE_SCHEMA) that contained a &#8220;users&#8221; table. I piped that output to xargs, which runs mysqldump on the specific database and its users table. Lastly, I piped the mysqldump output to a mysql client pointed at another server so that it could be imported in the same step.</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/mysqldump-specific-table-from-all-databases/">Mysqldump Specific Table From All Databases</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></content:encoded> <wfw:commentRss>https://brett.batie.com/software-development/mysqldump-specific-table-from-all-databases/feed/</wfw:commentRss> <slash:comments>1</slash:comments> </item> <item><title>Automatically Allocate IP Address to AWS Instances</title><link>https://brett.batie.com/software-development/automatically-allocate-ip-address-to-aws-instances/</link> <comments>https://brett.batie.com/software-development/automatically-allocate-ip-address-to-aws-instances/#respond</comments> <dc:creator><![CDATA[Brett]]></dc:creator> <pubDate>Mon, 31 Dec 2012 19:48:27 +0000</pubDate> <category><![CDATA[AWS]]></category> <category><![CDATA[Software Development]]></category> <guid
isPermaLink="false">http://brett.batie.com/?p=616</guid> <description><![CDATA[<p>I recently had a task where I needed to quickly start up 50 spot instances that all required an Elastic IP (EIP) address. I initially worked out the steps in the web console and determined I needed to accomplish the following: Request 50 spot instances based on an existing AMI Allocate 50 new EIPs Associate [&#8230;]</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/automatically-allocate-ip-address-to-aws-instances/">Automatically Allocate IP Address to AWS Instances</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></description> <content:encoded><![CDATA[<p>I recently had a task where I needed to quickly start up 50 spot instances that all required an Elastic IP (EIP) address. I initially worked out the steps in the web console and determined I needed to accomplish the following:</p><ol><li>Request 50 spot instances based on an existing AMI</li><li>Allocate 50 new EIPs</li><li>Associate each EIP with one of the newly running spot instances.</li></ol><p>Requesting 50 spot instances from the web console was quick and painless. However, the EIP allocation and association quickly became tiresome, as the Amazon web console only allows allocating and associating one EIP at a time. To repeat the following steps 50 times did not seem like a good use of time: requesting a new EIP, determining the appropriate instance ID to associate with the EIP and then assigning the EIP.</p><p>Instead, I decided to quickly put together a few commands to achieve the goal using the AWS API. First, I issued a request for 50 instances with a command like the following:</p><pre class="brush: bash;">ec2-request-spot-instances ami-1d2b34e5 --price .15 -n50 -s subnet-c1f234ae -t m2.4xlarge --kernel aki-88aa75e1</pre><p>Of course the above command will need to be modified for each specific case. 
The specific AMI ID, max bid price, number of instances, subnet, instance type and kernel will all need their respective values modified.</p><p>Then I waited until all of the instances were running with a command like the following:</p><pre class="brush: bash;">ec2-describe-instances --filter "instance-state-code=16" | grep 'spot' | grep -E '10\.0\.1\.[0-9]{1,3}\s+vpc' | wc -l</pre><p>The above command lists all of the running instances (instance-state-code=16), limits it to only spot instances and then limits the output to a specific VPC that has an internal address in the 10.0.1.* range.</p><p>Once the above command displayed 50, I was ready to start allocating and associating IP addresses. I accomplished this with a combination of commands. I needed to use <a
href="http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-AllocateAddress.html">ec2-allocate-address</a> to request a new EIP and <a
href="http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-DescribeInstances.html">ec2-describe-instances</a> to get a list of instances that need an EIP. Last, <a
href="http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-AssociateAddress.html">ec2-associate-address</a> needed to be used to associate the new EIP with a specific instance ID. The command to accomplish this looked like the following:</p><pre class="brush: bash;">for((i=0;i&lt;50;i++)); do ec2-associate-address -a `ec2-allocate-address -d vpc | cut -f5` -i `ec2-describe-instances --filter "instance-state-code=16" | grep 'spot' | grep -E 'monitoring-[a-zA-Z]+\s+10\.0\.1' | cut -f2 | head -n1`; done</pre><p>The above runs the <a
href="http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-AssociateAddress.html">ec2-associate-address</a> command 50 times. It then runs two sub-commands: one that requests a new EIP address in the VPC (<code>ec2-allocate-address -d vpc</code>) and one that gets the next running spot instance that does not yet have an EIP (<code>ec2-describe-instances --filter "instance-state-code=16" ...</code>).</p><p>Last, the new EIPs can be listed with a command like the following:</p><pre class="brush: bash;">ec2-describe-instances | grep 'spot' | cut -f17</pre><p>This worked beautifully for my goal of quickly firing up 50 spot instances and assigning an EIP address to each. As always, there is room for improvement. If automating something like this on a regular basis, I would suggest taking the one-line command and doing more validation on the output of each command. The above assumes that everything is in a good state and that no issues occur in requesting or assigning the EIPs.</p><p>Let me know if you find this useful!</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/automatically-allocate-ip-address-to-aws-instances/">Automatically Allocate IP Address to AWS Instances</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></content:encoded> <wfw:commentRss>https://brett.batie.com/software-development/automatically-allocate-ip-address-to-aws-instances/feed/</wfw:commentRss> <slash:comments>0</slash:comments> </item> <item><title>HydraIRC / Freenode &#8211; Auto Connect, Identify, Join</title><link>https://brett.batie.com/software-development/hydrairc-freenode-auto-connect-identify-join/</link> <comments>https://brett.batie.com/software-development/hydrairc-freenode-auto-connect-identify-join/#comments</comments> <dc:creator><![CDATA[Brett]]></dc:creator> <pubDate>Thu, 28 Jun 2012 00:06:29 +0000</pubDate> <category><![CDATA[Software Development]]></category> <guid
isPermaLink="false">http://brett.batie.com/?p=550</guid> <description><![CDATA[<p>I often join IRC channels where other developers hang out. I&#8217;ve found this to be very beneficial in keeping up to speed with changing technologies. I generally stick to two servers Freenode and OFTC. Freenode is by far my favorite as it seems to be the standard server for other developers to join and create [&#8230;]</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/hydrairc-freenode-auto-connect-identify-join/">HydraIRC / Freenode &#8211; Auto Connect, Identify, Join</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></description> <content:encoded><![CDATA[<p>I often join <a
title="IRC" href="http://en.wikipedia.org/wiki/Internet_Relay_Chat">IRC</a> channels where other developers hang out. I&#8217;ve found this to be very beneficial in keeping up to speed with changing technologies. I generally stick to two servers: <a
title="freenode" href="http://freenode.net/">Freenode</a> and <a
title="OFTC" href="http://www.oftc.net/oftc/">OFTC</a>. Freenode is by far my favorite, as it seems to be the standard server for other developers to join and create channels about their products.</p><p>Over the years, I&#8217;ve found this extremely beneficial. I can generally get immediate responses about bugs, feature requests and upcoming enhancements in software I&#8217;m using. Also, if I need a feature added to a piece of software, I can code it up and send it straight to the developers. I do highly recommend all developers participate in IRC channels, as a huge amount of quality information is being shared and it&#8217;s easy to find someone who is more specialized in a specific software development topic.</p><p>That said, I&#8217;ve used <a
title="HydraIRC" href="http://www.hydrairc.com/">HydraIRC</a> for the last year and it&#8217;s been growing on me. I was previously using <a
title="mIRC" href="http://www.mirc.com/">mIRC</a>, which is also a good product.</p><h3>Automated Connection To Two Servers</h3><p>The very first task I had after installing HydraIRC was to set up an automated script to connect to my favorite servers and favorite channels. The scripting piece was quite simple and consisted of the following steps:</p><ol><li>Open HydraIRC and click <strong>Options &#8211;&gt; Prefs</strong>, then click <strong>Scripts</strong></li><li>In the Command Profiles box type the name &#8220;<strong>OnStartup</strong>&#8221; (this is case-sensitive)</li><li>In the Commands window type the commands to join the server. For me this was:</li></ol><pre class="brush:text">/server irc.freenode.net:6667
/newserver irc.oftc.net:6667</pre><p><img
data-attachment-id="579" data-permalink="https://brett.batie.com/software-development/hydrairc-freenode-auto-connect-identify-join/attachment/hydrairc_preferences_2012-06-27_16-13-42/" data-orig-file="https://i0.wp.com/brett.batie.com/wp-content/uploads/2012/06/HydraIRC_Preferences_2012-06-27_16-13-42.png?fit=658%2C457&amp;ssl=1" data-orig-size="658,457" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}" data-image-title="HydraIRC Preferences On Startup Script" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/brett.batie.com/wp-content/uploads/2012/06/HydraIRC_Preferences_2012-06-27_16-13-42.png?fit=350%2C243&amp;ssl=1" data-large-file="https://i0.wp.com/brett.batie.com/wp-content/uploads/2012/06/HydraIRC_Preferences_2012-06-27_16-13-42.png?fit=600%2C416&amp;ssl=1" loading="lazy" class="alignnone size-large wp-image-579" title="HydraIRC Preferences On Startup Script" src="https://i0.wp.com/brett.batie.com/wp-content/uploads/2012/06/HydraIRC_Preferences_2012-06-27_16-13-42-600x416.png?resize=600%2C416" alt="HydraIRC Preferences On Startup Script" width="600" height="416" srcset="https://i0.wp.com/brett.batie.com/wp-content/uploads/2012/06/HydraIRC_Preferences_2012-06-27_16-13-42.png?resize=600%2C416&amp;ssl=1 600w, https://i0.wp.com/brett.batie.com/wp-content/uploads/2012/06/HydraIRC_Preferences_2012-06-27_16-13-42.png?resize=150%2C104&amp;ssl=1 150w, https://i0.wp.com/brett.batie.com/wp-content/uploads/2012/06/HydraIRC_Preferences_2012-06-27_16-13-42.png?resize=350%2C243&amp;ssl=1 350w, 
https://i0.wp.com/brett.batie.com/wp-content/uploads/2012/06/HydraIRC_Preferences_2012-06-27_16-13-42.png?w=658&amp;ssl=1 658w" sizes="(max-width: 600px) 100vw, 600px" data-recalc-dims="1" /></p><p>With those two commands in place, HydraIRC will automatically connect to both Freenode and OFTC when I first start the application. Sweet! As a developer, doesn&#8217;t it always feel good to save a little time?</p><h3>Join Channels After Connect</h3><p>Now, we just need to create two more scripts to connect to specific channels after we join the servers. This consists of the following (similar) steps:</p><ol><li>Open HydraIRC and click <strong>Options &#8211;&gt; Prefs</strong>, then click <strong>Scripts</strong></li><li>In the Command Profiles box type the name &#8220;<strong>irc.freenode.net_OnLoggedIn</strong>&#8221; (again case-sensitive) or <strong>irc.oftc.net_OnLoggedIn</strong>. Basically, the format is <strong>serverName_OnLoggedIn</strong>.</li><li>In the commands window type:</li></ol><pre class="brush: text;">/join #php
/join #java
/join #html
/join #apache
(and whatever other channels you like)</pre><h3>Identify After Connect But Before Channel Join</h3><p>With the above in place, OFTC was working perfectly, but Freenode was telling me:</p><blockquote><p>Cannot join channel &#8211; you need to be identified with the service</p></blockquote><p>To resolve this, the command <strong>/msg NickServ identify</strong> needs to be sent to the server (it&#8217;s like logging in).</p><p>The trouble is, doing the commands in the following order does not work:</p><pre class="brush: text;">/msg NickServ identify myPass
/join #php
/join #java</pre><p>This is because HydraIRC does not wait for the response to the <strong>identify</strong> command, but instead immediately runs the next <strong>join</strong> command. This means HydraIRC tries to join the channels before the server has finished identifying my nick. Darn!</p><p>With a little more thought, I came to a solution that has worked very well. Basically, I needed to execute:</p><pre class="brush: text;">/msg NickServ identify myPass
(pause)
/join #php
/join #java</pre><p>But, wait, HydraIRC does not have a (pause). The workaround was to instead use ping, which I&#8217;ve seen used for pausing in other <a
href="http://stackoverflow.com/questions/735285/how-to-wait-in-a-batch-script">scripts that do not support a true pause/wait</a> (e.g. DOS batch scripts).</p><p>I modified my irc.freenode.net_OnLoggedIn profile to have the following commands, and now when I fire up HydraIRC, I automatically connect to my favorite servers, log into NickServ, and join my favorite channels, all while sipping on a little coffee!</p><pre class="brush: text;">/msg nickserv identify myPass
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/ping localhost
/join #php</pre><p>I love automation!</p><p><strong>What other handy scripts do you use in your IRC client? Please do share!</strong></p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/hydrairc-freenode-auto-connect-identify-join/">HydraIRC / Freenode &#8211; Auto Connect, Identify, Join</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></content:encoded> <wfw:commentRss>https://brett.batie.com/software-development/hydrairc-freenode-auto-connect-identify-join/feed/</wfw:commentRss> <slash:comments>7</slash:comments> </item> <item><title>Disable / Enable Symantec Protection via Command Line</title><link>https://brett.batie.com/scripting/disable-enable-symantec-protection-via-command-line/</link> <comments>https://brett.batie.com/scripting/disable-enable-symantec-protection-via-command-line/#respond</comments> <dc:creator><![CDATA[Brett]]></dc:creator> <pubDate>Sat, 05 Nov 2011 22:16:26 +0000</pubDate> <category><![CDATA[Scripting]]></category> <guid
isPermaLink="false">http://brett.batie.com/?p=669</guid> <description><![CDATA[<p>On occasion I need to run some software tests where Symantec gets in the way. So I put together a simple batch file that will stop and start Symantec. Just add the following commands to a symantec.bat file. Then you can run the commands symantec start or symantec stop. if "%1" == "stop" ( echo [&#8230;]</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/scripting/disable-enable-symantec-protection-via-command-line/">Disable / Enable Symantec Protection via Command Line</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></description> <content:encoded><![CDATA[<p></p><p>On occasion I need to run some software tests where Symantec gets in the way. So I put together a simple batch file that will stop and start Symantec. Just add the following commands to a symantec.bat file. Then you can run the commands <code>symantec start</code> or <code>symantec stop</code>.</p><pre class="brush: bash;">if "%1" == "stop" (
        echo "stopping"
	net stop "Symantec Endpoint Protection"
	net stop "Symantec Event Manager"
	net stop "Symantec Settings Manager"
	net stop "Symantec Network Access Control"
	"c:\Program Files\Symantec\Symantec Endpoint Protection\smc.exe" -stop
) else (
	if "%1" == "start" (
		echo "starting"
		net start "Symantec Endpoint Protection"
		net start "Symantec Event Manager"
		net start "Symantec Settings Manager"
		net start "Symantec Network Access Control"
		"c:\Program Files\Symantec\Symantec Endpoint Protection\smc.exe" -start
	) else (
		if "%1" == "stop_ntp" (
			echo "not supported"
		) else (
			echo "usage: symantec start|stop"
		)
	)
)</pre><p>The post <a
rel="nofollow" href="https://brett.batie.com/scripting/disable-enable-symantec-protection-via-command-line/">Disable / Enable Symantec Protection via Command Line</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></content:encoded> <wfw:commentRss>https://brett.batie.com/scripting/disable-enable-symantec-protection-via-command-line/feed/</wfw:commentRss> <slash:comments>0</slash:comments> </item> <item><title>Mercurial Hook for Syntax Checking (PHP)</title><link>https://brett.batie.com/software-development/mercurial-hook-for-syntax-checking-php/</link> <comments>https://brett.batie.com/software-development/mercurial-hook-for-syntax-checking-php/#comments</comments> <dc:creator><![CDATA[Brett]]></dc:creator> <pubDate>Fri, 08 Oct 2010 17:29:59 +0000</pubDate> <category><![CDATA[Software Development]]></category> <category><![CDATA[hook]]></category> <category><![CDATA[Mercurial]]></category> <category><![CDATA[php]]></category> <category><![CDATA[Python]]></category> <category><![CDATA[SCM]]></category> <category><![CDATA[Source Control]]></category> <category><![CDATA[Syntax]]></category> <guid
isPermaLink="false">http://brett.batie.com/?p=526</guid> <description><![CDATA[<p>For those unfamiliar with Mercurial, it is an awesome Source Control Management (SCM) tool. One of my favorite features of Mercurial is that the repositories are distributed which allows each machine to have a full copy of the project&#8217;s history. Being distributed has many advantages such as faster committing, branching, tagging, merging, etc. since it [&#8230;]</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/mercurial-hook-for-syntax-checking-php/">Mercurial Hook for Syntax Checking (PHP)</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></description> <content:encoded><![CDATA[<p></p><p>For those unfamiliar with <a
href="http://mercurial.selenic.com/">Mercurial</a>, it is an awesome <a
href="http://en.wikipedia.org/wiki/Revision_control">Source Control Management (SCM)</a> tool. One of my favorite features of Mercurial is that the repositories are distributed, which allows each machine to have a full copy of the project&#8217;s history. Being distributed has many advantages, such as faster committing, branching, tagging, merging, etc., since it is all done locally. Of course, this setup also creates a backup of the repository each time an engineer clones a repository. There are a lot of benefits to using Mercurial, but that is not the focus of this post.</p><p>In this article, I am going to discuss how to set up a Mercurial hook to handle checking the syntax of files. Specifically, the hook will be set up to check the syntax of PHP files. This is beneficial as it will prevent users from adding invalid files to Mercurial and will keep the repository clean. Better yet, when dealing with a repository for a live website, it will prevent invalid files from ever being added to the live site.</p><h4>The Pretxnchangegroup Event</h4><p>Mercurial hooks are programs that Mercurial will execute during specific events. Ideally, a hook such as syntax checking would happen just before a commit is being made (the <a
href="http://hgbook.red-bean.com/read/handling-repository-events-with-hooks.html#sec:hook:precommit">precommit</a> event). Since Mercurial is distributed, this would require each client to install and set up the hook. This may work for some, but it does require more work and can cause issues if the hook is not set up correctly on each machine.</p><p>There is a better solution for environments that have a central repository for everyone to push their changes to. Basically, the hook can be set up on the <a
href="http://hgbook.red-bean.com/read/handling-repository-events-with-hooks.html#sec:hook:pretxnchangegroup">pretxnchangegroup</a> event. This event is executed just before a changeset (group of commits) is added to a remote repository (during a push).</p><p>To set up a hook on the pretxnchangegroup event, the syntax checking will need to build a list of every file that was changed for each changeset and then check the syntax on the latest version of each file. If there is a syntax error, the hook can exit with the appropriate status code to prevent the changesets from being added to the central repository.</p><p>When using the pretxnchangegroup event, each machine will be able to commit changes with files that have syntax errors. However, when trying to push the files to the central server, the changesets will be rejected until the syntax errors have been fixed.</p><h4>In-Process vs. External Hooks</h4><p>With Mercurial, there are two types of hooks: an in-process and an external hook. An in-process hook is a Python module that is loaded when Mercurial starts. An external hook can use any programming language that is supported by the OS.</p><p>There are advantages to using both an in-process and an external hook. An external hook is most beneficial when the code is already written in another language or the developers are more familiar with a language other than Python. An in-process hook has some nice advantages as it allows the developer access to the internals of Mercurial. It also gives the ability to display a message to the user when making a change in the repository.</p><h4>External Hook Using a Shell Script</h4><p>In order to show how Mercurial hooks work, I have developed both an external and in-process hook to check the syntax of PHP files. Below is the source code for an external hook. This hook is a bash script that I named php_syntax.sh.</p><pre class="brush: bash;">#!/usr/local/bin/bash
echo "STARTING PHP SYNTAX CHECK..."
# create a random temp file
temp_file=`/usr/bin/mktemp -t php_syntax_files`
# get all modified files and remove duplicates
#note: use file_mods,file_adds instead
hg log -r $HG_NODE:tip --template "{files}\n" | sort | uniq &gt; $temp_file
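# (A hedged, untested sketch of the note above: {file_mods} and {file_adds}
#  would skip deleted files, e.g. something like:
#  hg log -r $HG_NODE:tip --template "{file_mods} {file_adds}\n" | tr ' ' '\n' | sort | uniq &gt; $temp_file
#  though filenames containing spaces would still need extra care)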
# Walk through each line
#for line in "$temp_file"; do
for line in $(&lt; $temp_file); do
	# Make sure it is a php file
	if [ `echo $line | grep -Ei "\.(php|php4|php5)$"` ]
	then
		# create a random temp file
		php_file=`/usr/bin/mktemp -t php_syntax_check`
		# save the contents of this file (latest commit) to the temp file
		hg cat -r tip $line &gt; $php_file
		# check the syntax
		php_syntax_output=`/usr/local/bin/php -l -d display_errors=1 -d error_reporting=4 -d html_errors=0 &lt; $php_file`;
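		# (note: php -l also sets a non-zero exit status on a parse error, so
		#  checking $? right here would be an alternative to grepping the output below)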
		# remove the temp file
		rm -f $php_file;
		test_syntax=`echo $php_syntax_output | grep "Parse error"`
		if [ "$test_syntax" ]; then
			exit 1;
		fi
	fi
done
rm -f "$temp_file"</pre><p>The above code will check the latest version of each file that is being changed when pushing to the server. It will only check files that have an extension of PHP, PHP4 or PHP5. The content of each file that is being pushed to the server is then stored in a temporary file and passed to PHP to check the syntax. If the syntax check fails, the program returns a 1 for failure which causes the entire push to fail so that no changes are pushed to the server. If there are no syntax errors, the hook exits normally and continues to push the files to the server.</p><p>In order to install the above hook in Mercurial, simply add the following 2 lines to the <a
href="http://www.selenic.com/mercurial/hgrc.5.html">.hgrc</a> and/or the hgweb.config file.</p><pre class="brush: text;">[hooks]
pretxnchangegroup.syntax_check = /usr/home/mercurial/php_syntax.sh</pre><p>Of course the path in the above line needs to be updated to where the bash script was saved. The bash script will most likely need to be updated to contain the correct paths as well.</p><p>With all of the above in place the following message will be displayed to the user when trying to push a file that has a syntax error:</p><pre class="brush: text;">running hook pretxnchangegroup.syntax_check: /usr/home/code.softwareprojects.com/php_syntax.sh
transaction abort!
rollback completed
abort: pretxnchangegroup.syntax_check hook exited with status 127
warning: commit.autopush hook exited with status 1</pre><h4>In-Process Hook Using Python</h4><p>The major flaw with using the above shell script is it does not allow us to display a nice informative error to the user when their push fails due to a syntax error. This is one advantage of using a Python in-process hook instead. I have written very similar logic in Python which can be seen below:</p><pre class="brush: python;">import subprocess,os,re
import os.path
from mercurial import ui
from random import randrange
from time import time
def check(ui, repo, hooktype, node, **kwargs):
    #initialize variables
    error = ""
    fileSet = set()
    # Loop through each changeset being added to the repository
    for change_id in xrange(repo[node].rev(), len(repo)):
        # Loop through each file for the current changeset
        for currentFile in repo[change_id].files():
            # Only Check PHP Files
            if re.match(r'.*\.(php|php4|php5)$', currentFile):
                # Build a unique list of each file that has changed
                fileSet.add(currentFile)
    # Loop through each file that has changed
    for currentFile in fileSet:
        # Grab the latest version of the current file in the changeset
        ctx = repo['tip']
        # Do not check the file if it is being deleted
        if currentFile not in ctx:
            continue
        # Generate a unique temporary file name using random number and timestamp
        temp_file = '/tmp/php_syntax_check.%s%s' % (randrange(0,100000),int(time()))
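        # (A sketch of a more robust alternative: the standard tempfile module,
        #  e.g. fd, temp_file = tempfile.mkstemp(prefix='php_syntax_check.'),
        #  guarantees a unique file and avoids relying on a predictable name.)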
        # Open the temp file for writing
        f = open(temp_file,'w')
        # Get the file context
        fctx = ctx[currentFile]
        # Save the contents of the current file to the temp file
        f.write(fctx.data())
        # Close the temp file
        f.close()
        # Check the syntax of the current/temp file
        proc = subprocess.Popen('/usr/local/bin/php-cgi -l -d display_errors=1 -d error_reporting=4 -d html_errors=0 &lt; %s' % temp_file, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        # Retrieve the output of the syntax check
        out,err = proc.communicate()
        # Check for syntax errors and save them
        if 'Parse error' in out:
            error += "%s%s\n" % (out,currentFile)
        # Remove the temp file
        os.unlink(temp_file)
    # Check if an error occurred in any of the files that were changed
    if error != "":
        # Display a message to the user about each file that contained a syntax error
        ui.warn("******************************************************" +
            error +
            "******************************************************\n")
        # Reject the changesets
        return 1
    # Accept the changesets
    return 0</pre><p>This code is very similar in functionality to the shell script. It first builds a list of all of the files being pushed that have a PHP, PHP4 or PHP5 extension. Then it obtains the contents of each file that is being pushed and stores each file in a random temporary file. It checks the syntax of each file and then cancels the push if there is one or more files with invalid syntax.</p><p>Since this is an in-process hook, it is able to display a nice message to the user about why the push was not allowed. This hook is also set up to check every single file and display a message about every file that has a syntax error. This allows the hook to display a message to the user such as the following:</p><pre class="brush: text;">******************************************************
Parse error: syntax error, unexpected T_ECHO in - on line 3
Errors parsing -
afile_test.php
Parse error: syntax error, unexpected '@' in - on line 15
Errors parsing -
anotherfile_test.php
******************************************************</pre><p>In order to set up this hook with Mercurial, save the above Python code in a file that is on the <a
href="http://docs.python.org/tutorial/modules.html#the-module-search-path">PYTHONPATH</a>. Then add the following two lines of code to the .hgrc and/or the hgweb.config file.</p><pre class="brush: text;">[hooks]
pretxnchangegroup.syntax_check = python:php_syntax.check</pre><p>It is important to point out that the text on the right half of the equals sign tells Mercurial what to load. In this example, it says use Python, look for a file named php_syntax.py and call the function check.</p><p>Also, Mercurial will need to be restarted after setting up the above hook or after each time the hook is modified. This is because the in-process hook is loaded when Mercurial/Python is first started.</p><h4>Conclusion</h4><p>Mercurial is a great SCM tool and can be very powerful when combined with either in-process or external hooks. In-process hooks provide much more control and are the preferred method in most cases. The examples above are just an introduction to Mercurial hooks, and they can easily be modified for specific environments or checking the syntax of other languages.</p><p>Please leave a comment if you have found this code useful or share your experiences with Mercurial and hooks.</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/mercurial-hook-for-syntax-checking-php/">Mercurial Hook for Syntax Checking (PHP)</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></content:encoded> <wfw:commentRss>https://brett.batie.com/software-development/mercurial-hook-for-syntax-checking-php/feed/</wfw:commentRss> <slash:comments>4</slash:comments> </item> <item><title>Java Live Messenger (MSN) Robot</title><link>https://brett.batie.com/software-development/java-live-messenger-msn-robot/</link> <comments>https://brett.batie.com/software-development/java-live-messenger-msn-robot/#comments</comments> <dc:creator><![CDATA[Brett]]></dc:creator> <pubDate>Thu, 02 Sep 2010 14:34:00 +0000</pubDate> <category><![CDATA[Software Development]]></category> <guid
isPermaLink="false">http://brett.batie.com/software-development/java-live-messenger-msn-robot/</guid> <description><![CDATA[<p>I recently had a project to set up an Instant Messenger Robot for Windows Live Messenger. An IM robot can have many purposes such as: Keeping track of when contacts are online/offline and when they were last seen. Broadcasting a message to all contacts. Automatically answering common questions. Notifying contacts about new events. A newer site [&#8230;]</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/java-live-messenger-msn-robot/">Java Live Messenger (MSN) Robot</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></description> <content:encoded><![CDATA[<p></p><p>I recently had a project to set up an Instant Messenger Robot for Windows Live Messenger. An IM robot can have many purposes, such as:</p><ul><li>Keeping track of when contacts are online/offline and when they were last seen.</li><li>Broadcasting a message to all contacts.</li><li>Automatically answering common questions.</li><li>Notifying contacts about new events. A newer site http://notify.me has a nice IM robot that notifies you when an RSS feed is updated. This works well in conjunction with sites like craigslist.</li><li>Keeping track of code snippets</li><li>Checking the weather</li><li>Checking server status</li></ul><p>An IM robot can be set up to automate just about any task.</p><h4>What IM Library to Use</h4><p>Setting up an IM robot can be a bit of work, especially if starting from scratch. There are a lot of libraries out there that can be used to help simplify the process. The trouble is that a lot of libraries are not kept up to date and fail to work as IM protocols change.</p><p>I did some digging and found a library that would provide a good foundation to build an IM robot that can do just about anything. I saw implementations in PHP, C, Java, Perl and Python. After some testing, I concluded the <a
href="http://sourceforge.net/apps/trac/java-jml">Java MSN Library</a> would be a very good fit.</p><h4>How To Use It</h4><p>Using this library with Java is pretty straightforward. First, the library must be added to the classpath. The steps to complete this depend on how you&#8217;re developing your Java code. The most basic method to add a library to your classpath is to do this at run time with a command such as:</p><pre class="brush: java;">java -classpath MyLibrary.jar MyPackage.MyClass
title="Manifest File" href="http://download.oracle.com/javase/tutorial/deployment/jar/manifestindex.html">manifest file</a>. The manifest file is then placed inside the jar file and tells the executable jar where to look for the libraries. This manifest file should look something like the following (note the class-path on line 5):</p><pre class="brush: java;">Manifest-Version: 1.0
Ant-Version: Apache Ant 1.7.1
Created-By: 14.3-b01 (Sun Microsystems Inc.)
Main-Class: imstatus.Main
Class-Path: lib/jml-1.0b4-full.jar lib/httpcore-4.0.1.jar lib/mysql-co
 nnector-java-5.1.6-bin.jar
X-COMMENT: Main-Class will be added automatically by build</pre><p>This is set up so that the 3 required libraries are in the lib folder. These 3 libraries are needed for setting up an IM robot and can be downloaded from the following locations:</p><ul><li><a
href="http://sourceforge.net/projects/java-jml/files/java-jml/jml-1.0b4/jml-1.0b4-full.jar/download">jml-1.0b4-full.jar</a></li><li><a
href="http://apache.imghat.com//httpcomponents/httpclient/binary/httpcomponents-client-4.0.1-bin.zip">httpcore-4.0.1.jar</a></li><li><a
href="http://dev.mysql.com/downloads/connector/j/">mysql-connector-java-5.1.6-bin.jar</a></li></ul><p>Now that the libraries are setup we can begin to use them.</p><h4>Developing the IM Robot Code</h4><p>There are a few examples of using the Java MSN Library on the <a
href="http://sourceforge.net/apps/trac/java-jml">main page</a>. However, they are a tad confusing, as they create a new BasicMessenger class even though the library already has an abstract BasicMessenger class. The library also has a SimpleMessenger class, which is a subclass of BasicMessenger. This class appears to be the correct implementation that we would want to use to create a new IM robot. However, the original authors made the constructor protected so that we cannot instantiate the class outside of the original package. Since we want a simple way to create an IM robot, I have modified the original source code to have a public constructor for the SimpleMessenger class. This new package can be downloaded from <a
href="http://softwareprojects.com/files/jml-1.0b4-full.jar">our server</a>.</p><p>With this new package we can very easily create a new IM Robot with the following two lines of code (make sure to replace yourLogin and yourPassword):</p><pre class="brush: java;">SimpleMessenger messenger = new SimpleMessenger(Email.parseStr("yourLogin@msn.com"), "yourPassword");
messenger.login();</pre><p>With that code in our main function, we can run it and test that the robot automatically logs into Windows Live Messenger.</p><p>Of course, that code just logs the robot into Windows Live Messenger. The next step is to set up the robot to do something interesting. This is one feature that is very nice about the Java MSN Library, as it has listeners for many different events. For example, we can detect when the robot has finished logging in with the following:</p><pre class="brush: java;">messenger.addListener(new MsnAdapter() {
	// Setup the login completed event
	@Override
	public void loginCompleted(MsnMessenger messenger) {
		MsnOwner owner = messenger.getOwner();
		owner.setInitStatus(MsnUserStatus.ONLINE);
		owner.setStatus(MsnUserStatus.ONLINE);
		// Setup the contact list event
		messenger.addContactListListener(new ContactListAdapter());
	}
});</pre><p>Then we can take this a step further and detect when a status changes for one of the robot&#8217;s contacts with something like the following:</p><pre class="brush: java;">messenger.addListener(new MsnAdapter() {
	// Setup the login completed event
	@Override
	public void loginCompleted(MsnMessenger messenger) {
		MsnOwner owner = messenger.getOwner();
		owner.setInitStatus(MsnUserStatus.ONLINE);
		owner.setStatus(MsnUserStatus.ONLINE);
		// Setup the contact list event
		messenger.addContactListListener(new ContactListAdapter());
	}
});</pre><p>The above code will detect when the robot has finished logging in and then set up a new listener to detect when a contact&#8217;s status has changed. The new listener invokes the ContactListAdapter class when a status has changed. This ContactListAdapter class is set up as follows:</p><pre class="brush: java;">class ContactListAdapter extends MsnContactListAdapter {
	@Override
	public void contactStatusChanged(MsnMessenger messenger, MsnContact contact) {
		System.out.println(contact.getEmail()+" is currently "+contact.getStatus());
		// Can add code here to store the status in a database
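		// A hedged sketch using the bundled MySQL connector (the table and
		// credentials below are hypothetical, not from the original post):
		// Connection c = DriverManager.getConnection("jdbc:mysql://localhost/im_status", "user", "pass");
		// PreparedStatement s = c.prepareStatement("INSERT INTO status_log (email, status) VALUES (?, ?)");
		// s.setString(1, String.valueOf(contact.getEmail()));
		// s.setString(2, String.valueOf(contact.getStatus()));
		// s.executeUpdate();
		// c.close();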
	}
}</pre><p>We can still take this a step further and setup the robot to handle automatically adding contacts when a contact requests it. This logic can be added to the ContactListAdapter class with something like the following:</p><pre class="brush: java;">class ContactListAdapter extends MsnContactListAdapter {
	@Override
	public void contactListSyncCompleted(MsnMessenger messenger) {
			MsnContact[] contacts = messenger.getContactList().getContactsInList(MsnList.AL);
			for (int i = 0; i &lt; contacts.length; i++) {
				contactStatusChanged(messenger,contacts[i]);
			}
	}
	@Override
	public void contactAddedMe(MsnMessenger messenger, MsnContact contact) {
		messenger.addFriend(contact.getEmail(), contact.getDisplayName());
	}
	@Override
	public void contactAddedMe(MsnMessenger messenger, MsnContactPending[] pending){
		for(int i=0; i&lt;pending.length; i++){
			messenger.addFriend(pending[i].getEmail(), pending[i].getDisplayName());
		}
	}
	@Override
	public void contactStatusChanged(MsnMessenger messenger, MsnContact contact) {
		System.out.println(contact.getEmail()+" is currently "+contact.getStatus());
		// Can add code here to store the status in a database
	}
}</pre><p>There you have it! Put all of the above code together and you will have a robot that knows how to automatically add contacts and keep track of when a contact&#8217;s status changes.</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/java-live-messenger-msn-robot/">Java Live Messenger (MSN) Robot</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></content:encoded> <wfw:commentRss>https://brett.batie.com/software-development/java-live-messenger-msn-robot/feed/</wfw:commentRss> <slash:comments>1</slash:comments> </item> <item><title>UltraMon Breaks After Remote Desktop Connection (RDP)</title><link>https://brett.batie.com/software-development/ultramon-breaks-after-remote-desktop-connection-rdp/</link> <comments>https://brett.batie.com/software-development/ultramon-breaks-after-remote-desktop-connection-rdp/#comments</comments> <dc:creator><![CDATA[Brett]]></dc:creator> <pubDate>Tue, 06 Apr 2010 19:42:00 +0000</pubDate> <category><![CDATA[Software Development]]></category> <guid
isPermaLink="false">http://brett.batie.com/software-development/ultramon-breaks-after-remote-desktop-connection-rdp/</guid> <description><![CDATA[<p>I use the application UltraMon to help manage my multiple monitor setup. Overall this application is awesome as it makes moving applications between monitors a breeze and supports a separate task bar on each monitor, among other things. However, I have had this issue for a while where UltraMon will not move applications between monitors [&#8230;]</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/ultramon-breaks-after-remote-desktop-connection-rdp/">UltraMon Breaks After Remote Desktop Connection (RDP)</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></description> <content:encoded><![CDATA[<p></p><p>I use the application <a
href="http://www.realtimesoft.com/ultramon/">UltraMon</a> to help manage my multiple-monitor setup. Overall this application is awesome, as it makes moving applications between monitors a breeze and supports a separate task bar on each monitor, among other things.</p><p>However, I have had this issue for a while where UltraMon will not move applications between monitors after a Remote Desktop Connection has been established; instead, UltraMon acts as if there is only one monitor.</p><p>In the past, the only solution I had for this issue was to restart my computer. This is not an ideal solution for me, as I&#8217;m often multi-tasking and running many applications at the same time.</p><p>In order to reboot, I have to save my work, close each application, reboot, and then start every application again afterward. This is not a major amount of time, but it adds up if I have to do it on a regular basis. Plus, I am one who likes to optimize everything to achieve as much efficiency as humanly possible in a given day.</p><p>So, I spent a few minutes fiddling with UltraMon and found a way to fix this RDP flaw without requiring a reboot. The steps are as follows:</p><ol><li>Close UltraMon<a
rel="lightbox" href="https://i0.wp.com/brett.batie.com/wp-content/uploads/sshot201004055.png"><img
loading="lazy" style="display: inline; margin-left: 0px; margin-right: 0px; border: 0px initial initial;" title="sshot-2010-04-05-[5]" src="https://i0.wp.com/brett.batie.com/wp-content/uploads/sshot201004055_thumb.png?resize=121%2C130" border="0" alt="sshot-2010-04-05-[5]" width="121" height="130" align="right" data-recalc-dims="1" /></a></li><li>Right-click on your desktop and select <strong>Screen Resolution</strong></li><li>Disable the monitor that the application cannot be moved to and click Apply (screenshot on right). Repeat for all monitors that are suffering from this issue.</li><li>Start UltraMon</li><li>Enable all monitors that were disabled in step 3.</li></ol><p>After completing the above steps, UltraMon will be as good as new.</p><p>I have only tested this on Windows 7, so let me know if this works on other Windows versions as well.</p><p>Anyone up for automating the above steps? If I get a break, I might try to tackle it.</p><p>The post <a
rel="nofollow" href="https://brett.batie.com/software-development/ultramon-breaks-after-remote-desktop-connection-rdp/">UltraMon Breaks After Remote Desktop Connection (RDP)</a> appeared first on <a
rel="nofollow" href="https://brett.batie.com">Brett Batie</a>.</p> ]]></content:encoded> <wfw:commentRss>https://brett.batie.com/software-development/ultramon-breaks-after-remote-desktop-connection-rdp/feed/</wfw:commentRss> <slash:comments>1</slash:comments> </item> </channel> </rss>