<?xml version="1.0" encoding="utf-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><atom:link href="http://rmwpublishing.net.au/RSSRetrieve.aspx?ID=3903&amp;Type=RSS20" rel="self" type="application/rss+xml" /><title>Labs</title><description>Just like most web developers, we like to play with our code. Sometimes it's an aspect of a project that didn't make the final cut; sometimes we play around to see what we can do. Here are some of our experiments.</description><link>http://rmwpublishing.net.au/</link><lastBuildDate>Sun, 22 Nov 2020 19:49:50 GMT</lastBuildDate><docs>http://backend.userland.com/rss</docs><generator>RSS.NET: http://www.rssdotnet.com/</generator><item><title>ScanEnter - Event Ticketing System</title><description>ScanEnter is an event ticketing system specifically developed to operate without the need for an internet connection (no closed-source cloud database - no privacy loss). This solution allows you to run an event your way - you control the system front-to-back. From the mass mailout of QR-coded PDFs to the use of a private iOS app that allows scanning without the need for expensive add-on hardware - just use the built-in camera (works on basic Apple iPods / iPhones) with multi-device syncing to instantly prevent fraud.
ScanEnter allows for multiple ticket types including complimentary free entry.
Get live reports on event attendance and allocation exhaustion.
Pricing
As a standalone product there are no hidden per-ticket overheads that, when combined, are a huge expense to your event's profitability. On a high-ticket-price event, a small percentage can quickly add up. Your one-time software purchase gives you full access to the node.js source code, allowing you to customise your "front of house" entry system to work with your own business logic and workflow.
If you're interested in using ScanEnter to make ticketing work for you, contact us.
ScanEnter - the name
Why "ScanEnter"? Well, it's as simple as that: 'scan' the barcoded ticket, 'enter' the event! Done. Easy! Events
ScanEnter has had great success at the following events. Quicker Entry = Shorter Queues = Happier Attendees. 2015 Tough Dog Tuff Truck Challenge 2015 ULTRA4 Racing Australia King of the Hunter</description><link>http://rmwpublishing.net.au/RSSRetrieve.aspx?ID=3903&amp;A=Link&amp;ObjectID=688784&amp;ObjectType=56&amp;O=http%253a%252f%252frmwpublishing.net.au%252flabs-1%252fscanenter-event-ticketing</link><guid isPermaLink="true">http://rmwpublishing.net.au/labs-1/scanenter-event-ticketing</guid><pubDate>Mon, 04 Apr 2016 14:00:00 GMT</pubDate><content:encoded><![CDATA[<p><img src="http://rmwpublishing.net.au/blog/files/labs/scaneter-tickets-iphone.jpg" alt="Apple iOS iPhone Event Ticket Scanning" /></p>
<p>ScanEnter is an <strong>event ticketing system</strong> specifically developed to operate without the need for an internet connection (no closed-source cloud database - no privacy loss). This solution allows you to run an event your way - you control the system front-to-back. From the mass mailout of QR-coded PDFs to the use of a private iOS app that allows scanning without the need for expensive add-on hardware - just use the built-in camera (works on basic Apple iPods / iPhones) with multi-device syncing to instantly prevent fraud.</p>
<p>ScanEnter allows for multiple ticket types including complimentary free entry.</p>
<p>Get live reports on event attendance and allocation exhaustion.</p>
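<p>The fraud-prevention idea above - the first scan of a QR code admits, any repeat scan on any device is rejected - can be sketched in a few lines of JavaScript (ScanEnter ships node.js source). The ticket codes and the Set-based store below are purely illustrative, not the actual implementation:</p>

```javascript
// Minimal sketch of the gate-side double-scan check. A real deployment
// would persist the "used" set and replicate it between scanning devices.
function createGate(validCodes) {
  const valid = new Set(validCodes);
  const used = new Set();
  return {
    scan(code) {
      if (!valid.has(code)) return 'UNKNOWN';    // not a ticket for this event
      if (used.has(code)) return 'ALREADY_USED'; // duplicate or photocopied ticket
      used.add(code);
      return 'ADMIT';
    },
    // Multi-device syncing amounts to exchanging this list between gates.
    usedCodes() { return [...used]; }
  };
}
```

<p>Merging each device's used-code list into the others is what makes a second scan of the same ticket fail instantly at any gate.</p>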
<h3>Pricing</h3>
<p>As a standalone product there are no hidden per-ticket overheads that, when combined, are a huge expense to your event's profitability. On a high-ticket-price event, a small percentage can quickly add up. Your one-time software purchase gives you full access to the node.js source code, allowing you to customise your "front of house" entry system to work with your own business logic and workflow.</p>
<p>If you're interested in using ScanEnter to make ticketing work for you, <a href="/contact/">contact us</a>.</p>
<h3>ScanEnter - the name</h3>
<p>Why "ScanEnter"? Well, it's as simple as that: 'scan' the barcoded ticket, 'enter' the event! Done. Easy!</p>
<p><img src="http://rmwpublishing.net.au/blog/files/labs/scanenter-ticket-sample.jpg" alt="ScanEnter ticket sample" longdesc="Multiple tickets on a single printed page is better for the environment" /></p>
<h3>Events</h3>
<p>ScanEnter has had great success at the following events. Quicker Entry = Shorter Queues = Happier Attendees.</p>
<ol>
    <li>2015 <a href="https://tufftruck.com.au/">Tough Dog Tuff Truck Challenge</a></li>
    <li>2015 <a href="https://ultra4racing.com.au/">ULTRA4 Racing Australia</a> King of the Hunter</li>
</ol>]]></content:encoded></item><item><title>Comment Analysis</title><description>So I was reading my gMail just the other day (what's new about that? It seems I'm always reading email) and I was distracted by a WebClip (the updates fed from a few selected feeds at the top of my mail pane) about people angry over a Chaser skit I had seen the night before. I clicked through and read about how people had got their feathers ruffled over an "anti-ad" against the Make a Wish Foundation. I wasn't surprised; this is something the ABC and Chaser crew must be used to by now. I'm actually starting to think that the organisations complain deliberately and publicly just to get some free PR out of it. I noticed that the article had been published only just over an hour ago. I read the article and then noticed that the comments were already closed. I thought "that's strange". Most comment systems close after a few weeks or months to stop old debates being rehashed, as after a certain period of time it becomes irrelevant. But closing within a few hours is abnormal. So I had a look at the number of responses and there were an amazing 225 comments already. This must be the ABC's limit of comments on a single article - and fair enough - looking at most of the posts it was easy to see it had just become a case of tit-for-tat and opinions were being repeated (nothing new, nothing interesting). But what really stuck out to me was the number of comments in a very short time period. In about 80 minutes there had been 225 comments. That means there were nearly 3 comments submitted every minute. And I didn't think that the ABC website had that many users (well, not in comparison to Ninemsn for news). It then got me thinking... I wonder what a graph of comments over time would look like. Did it start slow and then build until the cut-off? Or were people very vocal as soon as the news "went to press", and did it then die off as similar opinions were expressed? 
I wonder? So I thought I should be able to find out quite easily and then also compare this to different types of sites. Were the trajectories of comment frequency similar on other sites? How would the ABC compare to the BBC? Or Engadget? Or shouts on last.fm? Proof: From using jQuery I know it's a simple task to crawl the DOM that makes up a page and collect the data that I needed. I also know that Google provides an easy-to-use Charts API that would allow me to visually plot the data. Then I needed an easy way to run the code on the 3rd-party sites. I didn't want to lock down to one browser (vendor or version) so I chose to make a favelet (or Bookmarklet - call it what you may). And here it is. Behind the News: Here's how I built it.</description><link>http://rmwpublishing.net.au/RSSRetrieve.aspx?ID=3903&amp;A=Link&amp;ObjectID=40430&amp;ObjectType=56&amp;O=http%253a%252f%252frmwpublishing.net.au%252flabs-1%252fcomment-analysis</link><guid isPermaLink="true">http://rmwpublishing.net.au/labs-1/comment-analysis</guid><pubDate>Wed, 24 Jun 2015 14:00:00 GMT</pubDate><content:encoded><![CDATA[So I was reading my gMail just the other day (what's new about that? It seems I'm always reading email) and I was distracted by a WebClip (the updates fed from a few selected feeds at the top of my mail pane) about people angry over a Chaser skit I had seen the night before. I clicked through and read about how people had got their feathers ruffled over an "anti-ad" against the <span style="font-style: italic;">Make a Wish Foundation</span>. I wasn't surprised; this is something the ABC and Chaser crew must be used to by now. I'm actually starting to think that the organisations complain deliberately and publicly just to get some free PR out of it.<div><br /></div><div>I noticed that the article had been published only just over an hour ago. I read the article and then noticed that the comments were already closed. I thought "that's strange". 
Most comment systems close after a few weeks or months to stop old debates being rehashed, as after a certain period of time it becomes irrelevant.</div><div><br /></div><div>But closing within a few hours is abnormal. So I had a look at the number of responses and there were an amazing 225 comments already. This must be the ABC's limit of comments on a single article - and fair enough - looking at most of the posts it was easy to see it had just become a case of tit-for-tat and opinions were being repeated (nothing new, nothing interesting).<br /><div><br /></div><div>But what really stuck out to me was the number of comments in a very short time period. In about 80 minutes there had been 225 comments. That means there were nearly 3 comments submitted every minute. And I didn't think that the ABC website had that many users (well, not in comparison to Ninemsn for news).</div><div><br /></div><div>It then got me thinking... I wonder what a graph of comments over time would look like. Did it start slow and then build until the cut-off? Or were people very vocal as soon as the news "went to press", and did it then die off as similar opinions were expressed? I wonder?</div><div><br /></div><div>So I thought I should be able to find out quite easily and then also compare this to different types of sites. Were the trajectories of comment frequency similar on other sites? How would the ABC compare to the BBC? Or Engadget? Or shouts on last.fm? </div><div><br /></div><div>Proof</div><div><br /></div><div>From using jQuery I know it's a simple task to crawl the DOM that makes up a page and collect the data that I needed. I also know that Google provides an easy-to-use Charts API that would allow me to visually plot the data. Then I needed an easy way to run the code on the 3rd party sites. 
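</div><div><br /></div><div>The core of that idea - bucket the comment timestamps per minute, then hand the counts to the Charts API - can be sketched as below. The selectors and chart parameters here are hypothetical, not the original favelet:</div>

```javascript
// Count comments per minute from an array of timestamps (milliseconds).
function bucketByMinute(times) {
  if (times.length === 0) return [];
  const t0 = Math.min(...times);
  const span = Math.max(...times) - t0;
  const buckets = new Array(Math.floor(span / 60000) + 1).fill(0);
  for (const t of times) buckets[Math.floor((t - t0) / 60000)] += 1;
  return buckets;
}

// Build a line-chart image URL from the counts (parameter names are
// illustrative; check the Charts API documentation for the real ones).
function chartUrl(counts) {
  const max = Math.max(...counts, 1);
  return 'https://chart.googleapis.com/chart?cht=lc&chs=400x150' +
         '&chds=0,' + max + '&chd=t:' + counts.join(',');
}

// In the favelet itself the timestamps would be scraped from the page, e.g.
//   var times = $('.comment .timestamp').map(function () {
//     return Date.parse($(this).text());
//   }).get();
//   new Image().src = chartUrl(bucketByMinute(times));
```

<div><br /></div><div>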
I didn't want to lock down to one browser (vendor or version) so I chose to make a favelet (or Bookmarklet - call it what you may).</div><div><br /></div><div>And here it is.</div><div><br /></div><div>Behind the News</div><div><br /></div><div>Here's how I built it.</div></div>]]></content:encoded></item><item><title>Corporate Privacy: ReferralCandy</title><description>Fortunately for us, we have some very proactive clients when it comes to marketing their businesses (one of the reasons why they engaged us in the first place). Over the years with us they learn the many different ways to acquire and convert customers. One such way is through affiliate or referral marketing. One of our clients came to us looking to use an online 3rd-party referral marketing service for their site: ReferralCandy. Through reading the technical implementation requirements, we discovered some risks that we felt needed explaining before using their services (especially as the service was not provided by a local company). We're considering your service, but have noticed 2 things of concern:
1) through the use of the tracking code and the BCC on invoice emails, you are privy to a lot of personal data about our end-users and our business (like the total income of the business through online sales) - understandably this is very confidential information.
2) your website lacks any type of terms and conditions or privacy policy to give us the confidence that this very personal information will not be disclosed to third parties or used for any other reasons beyond the management of referral sales and conversions.
What are your policies around the collection, storage and use of this data? Like in real life, before providing confidential or personal details, you need to trust that the recipient is not going to take you for a ride. This is something that is often overlooked in this growing world of embedded scripts, third-party plugins and open APIs.
This is just one of the reasons why we enjoy continued work from all our great clients - we go beyond just web site development and actually put ourselves in our clients' shoes to ensure the best for their business.</description><link>http://rmwpublishing.net.au/RSSRetrieve.aspx?ID=3903&amp;A=Link&amp;ObjectID=501762&amp;ObjectType=56&amp;O=http%253a%252f%252frmwpublishing.net.au%252flabs-1%252fcorporate-privacy-referralcandy</link><guid isPermaLink="true">http://rmwpublishing.net.au/labs-1/corporate-privacy-referralcandy</guid><pubDate>Thu, 28 Aug 2014 05:42:00 GMT</pubDate><content:encoded><![CDATA[<p>Fortunately for us, we have some very proactive clients when it comes to marketing their businesses (one of the reasons why they engaged us in the first place). Over the years with us they learn the many different ways to acquire and convert customers. One such way is through affiliate or referral marketing. One of our clients came to us looking to use an online 3rd-party referral marketing service for their site: <a href="http://www.referralcandy.com/">ReferralCandy</a>. Through reading the technical implementation requirements, we discovered some risks that we felt needed explaining before using their services (especially as the service was not provided by a local company).</p>
<blockquote>
<p>We're considering your service, but have noticed 2 things of concern:</p>
<p>1) through the use of the tracking code and the BCC on invoice emails, you are privy to a lot of personal data about our end-users and our business (like the total income of the business through online sales) - understandably this is very confidential information.</p>
<p>2) your website lacks any type of terms and conditions or privacy policy to give us the confidence that this very personal information will not be disclosed to third parties or used for any other reasons beyond the management of referral sales and conversions.</p>
<p>What are your policies around the collection, storage and use of this data?</p>
</blockquote>
<p>Like in real life, before providing confidential or personal details, you need to trust that the recipient is not going to take you for a ride. This is something that is often overlooked in this growing world of embedded scripts, third-party plugins and open APIs.</p>
<p>This is just one of the reasons why we enjoy continued work from all our great clients - we go beyond just web site development and actually put ourselves in our clients' shoes to ensure the best for their business.</p>]]></content:encoded></item><item><title>www.guff</title><description>www.guff
www is wrong wrong wrong, let's clean up the World Wide Web.
Why?
When the web was expanding from its inter-university beginnings and gopher was still in common use, the www sub-domain's common use was to signify the world wide website address of a domain. Just like ftp.domain.com is commonly used for File Transfer Protocol, and mail., smtp., and pop. are all common sub-domains used to point requests to a mail server.
Not Needed
These different domains made it easy to route the different requests to a different server, but these days this can easily be handled by a "load balancer" using the different port number each protocol uses to send the packets in the right direction. Often all the services are handled by one machine, further removing the need to set up a unique address for each request type (protocol).
Branding Clutter
Not only is the 'w' character the widest, most space-hogging letter in the English alphabet, it also takes the longest time to pronounce. "double u double u double u dot" adds no value in a time-restricted radio advert. Let "dot com", or similar, be the audio cue that a URL was just read out. Over the phone or on the radio get straight to your brand; don't hide it in some techno talk.
When your domain name is your online address, why would you prefix it with something irrelevant? Something to distract from having your brand up front? Something that adds 4 extra characters to your limited amount of screen space? Both in the address bar and in the long list of search results, get your brand first in the location string; don't hide your brand between a pseudo protocol and the domain namespace.
When to use
Unfortunately, you will still need to accept requests on the www. sub-domain as people will still instinctively attempt to look up your website with it. When the server does get a request to this subdomain, redirect it to the non-www version for consistency and to stop search engines indexing both URLs, as technically they are 2 different addresses and search engines treat them as such. It also helps break the human conditioning of typing four unnecessary keystrokes to come back to your site. Hopefully, one day soon, www. will be a thing of the past.</description><link>http://rmwpublishing.net.au/RSSRetrieve.aspx?ID=3903&amp;A=Link&amp;ObjectID=49416&amp;ObjectType=56&amp;O=http%253a%252f%252frmwpublishing.net.au%252flabs-1%252fwwwguff</link><guid isPermaLink="true">http://rmwpublishing.net.au/labs-1/wwwguff</guid><pubDate>Thu, 31 Dec 2009 13:00:00 GMT</pubDate><content:encoded><![CDATA[<h2>www.guff</h2>
<p><em>www is wrong wrong wrong, let's clean up the World Wide Web.</em></p>
<h3>Why?</h3>
<p>When the web was expanding from its inter-university beginnings and <a href="http://en.wikipedia.org/wiki/Gopher_(protocol)">gopher</a> was still in common use, the <tt>www</tt> sub-domain's common use was to signify the world wide website address of a domain. Just like <tt>ftp.domain.com</tt> is commonly used for <strong>File Transfer Protocol</strong>, and <tt>mail.</tt>, <tt>smtp.</tt>, and <tt>pop.</tt> are all common sub-domains used to point requests to a <strong>mail</strong> server.</p>
<h3>Not Needed</h3>
<p>These different domains made it easy to route the different requests to a different server, but these days this can easily be handled by a "load balancer" using the different port number each protocol uses to send the packets in the right direction. Often all the services are handled by one machine, further removing the need to set up a unique address for each request type (protocol).</p>
<h3>Branding Clutter</h3>
<p>Not only is the '<strong>w</strong>' character the widest, most space-hogging letter in the English alphabet, it also takes the longest time to pronounce. "double u double u double u dot" adds no value in a time-restricted radio advert. Let "dot com", or similar, be the audio cue that a <abbr title="Uniform Resource Locator">URL</abbr> was just read out. Over the phone or on the radio get <strong>straight to your brand</strong>; don't hide it in some techno talk.</p>
<p>When your domain name is your online address, why would you prefix it with something irrelevant? Something to distract from having your brand up front? Something that adds 4 extra characters to your limited amount of screen space? Both in the address bar and in the long list of search results, get your brand first in the location string; don't hide your brand between a pseudo protocol and the domain namespace.</p>
<h3>When to use</h3>
<p>Unfortunately, you will still need to accept requests on the <tt>www.</tt> sub-domain as people will still instinctively attempt to look up your website with it. When the server does get a request to this subdomain, <strong>redirect</strong> it to the non-www version for consistency and to stop search engines indexing both URLs, as technically they are 2 different addresses and search engines treat them as such. It also helps break the human conditioning of typing four unnecessary keystrokes to come back to your site. Hopefully, one day soon, <tt>www.</tt> will be a thing of the past.</p>
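<p>The redirect rule itself fits in a few lines; here is a sketch as a Node.js helper (the function name and domains are hypothetical - the same rule is just as often a one-line rewrite in the web server's own configuration). Responding with a permanent (301) redirect is what tells search engines to index only the bare domain:</p>

```javascript
// Given the request's Host header and path, return the canonical
// non-www URL to 301-redirect to, or null if no redirect is needed.
function redirectTarget(host, path) {
  if (host && host.toLowerCase().startsWith('www.')) {
    return 'http://' + host.slice(4) + path;
  }
  return null; // already on the bare domain; serve the page normally
}
```

<p>In an HTTP handler this amounts to: if <tt>redirectTarget</tt> returns a URL, answer with status 301 and a <tt>Location</tt> header; otherwise serve the page.</p>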
<p class="img"><a href="http://no-www.org/"><img src="http://no-www.org/images/blog-button.gif" alt="no-www.org button" width="80" height="15" /></a></p>]]></content:encoded></item><item><title>Going South: Time-lapsed travel</title><description>In a few weeks we will be meeting up with a fellow geek and long-time friend Ben Duncan in the middle of Australia. Ben and I both went to primary and high school together and both created jobs for ourselves in the IT industry. While I develop websites, Ben has built an online email management product. I'm taking the family on a 2-week break, driving from Alice Springs to Darwin. On the other hand, Ben has been travelling down the east coast since January on a journey planned to take him right around the edge of Australia. On the roof of his 4WD he has set up a webcam to take a photo every 5 minutes and upload it to a server on the internet (the server is nicknamed "the borg"). Read more about his trip and car setup on his blog. THEN: The Website When Ben dropped into Bathurst on his way south, he showed me his setup: roof-top tent, luggage compartments, solar panels, batteries, inverters, NextG modem, wi-fi router, laptop and webcam. All that he had about the trip online was a domain name pointing to his blog post. He hadn't even linked to all the images already uploaded. While Ben caught up on some business emails on my ADSL 2 connection, I grabbed a free designer template, Google My Maps, his company Twitter feed and wrote a script to link to the latest image uploaded, and within less than an hour we had the domain pointed at a mashed-up portal for Ben's Mobile Office adventure around Australia. 
TODAY: The Production With iTunes pumping out some Sunday morning beats and Skype running (in preparation for Ben to come online and abuse me for hammering his server) Firefox struggled through a quickly whipped-up 3-line jQuery script (only 2 lines really do anything, the other is syntax) to glean the URLs of the images and display them on the same page. $('a').each(function() { $(this).after('&lt;img src="' + $(this).attr('href') + '" /&gt;'); }); Then I used the Save Web Page, Complete option in Firefox to grab all the rendered files. While that was saving (it took about 10 minutes) I was in Apple's drag-and-drop scripting editor Automator creating a workflow to rename all the downloaded images with filenames of consecutive numbers based on their timestamps. For some reason (I guess the nature of being on the road without a constant 'net connection or power supply) the image file names were not consecutive. They were all over the place! The file creation date was the only way to put the files in chronological order. Unfortunately, when the files were saved to my desktop the original creation date was reset to today's date (and the files don't have EXIF info), so I had no data to re-order the files. I thought about writing a PHP script to fetch the files and save them directly to my local web server with a file name based on its position in the borg's Apache listing (sorted by date). Or I could just ask Ben for the FTP details and download them directly from the server. I shot him an email requesting the connection details; I even sent him a 30-frame grab of the un-ordered images in mpg format to entice him that this would be cool. About 10 seconds after the email was sent I decided I was up for the challenge and got started on a PHP script to fetch the files and save them locally. I modified the jQuery (my current JavaScript library of choice, so I use it where I can) to build an array of the filenames, which I then copied and pasted into a PHP script to fetch the files and save them locally. 
I added a 2-second pause between each fetch to give the server a break. With 4017 image files it took over 2 hrs to get all the images, but now they are neatly ordered. var array = []; $('a').each(function() { array.push( $(this).attr('href') ); }); $('body').prepend('["' + array.join('","') + '"]'); $array = array('050101003114001.jpg', … '050101050728001.jpg'); $count = 0; foreach ($array as $file) { $count++; $remoteFileContents = file_get_contents('http://theborg.calacode.com/carcam/' . $file); $newFileHandle = fopen('/atmailmobileoffice/' . $count . '.jpg', 'w'); fwrite($newFileHandle, $remoteFileContents); fclose($newFileHandle); sleep(2); } RESULT: The Movie From there it was a cinch to use the QuickTime "Import Image sequence" option to build a time-lapsed movie of 3983 frames (after removing 34 corrupted images), which at 15 frames a second resulted in a movie just over 4:25 min long. I added some copyleft music, compressed it and uploaded it to Vimeo (which, to me, has better HD compression than other hosted social video services). And here it is: http://vimeo.com/5151475 I like pausing it and scanning each frame/image one at a time when I see something interesting. The Extras Things I noticed from watching the video: Ben likes to camp in view of water The path they are taking is a hit with travellers (a lot of the time they are driving behind a caravan) The camera gets turned off at night (to save power, plus there is nothing to look at) It rained a lot! (and the camera enclosure gets condensation) The camera is slowly falling to the right (might need to fix that, Ben) From flicking through 143 MB of raw files here are some of the snaps that grabbed my attention: The seagull was caught in mid-flight. Probably not the same seagull, but almost snapped on the same angle. And a dog on a leash. 
When I first watched the video and only saw this for a split second, it looked like the car was on fire.</description><link>http://rmwpublishing.net.au/RSSRetrieve.aspx?ID=3903&amp;A=Link&amp;ObjectID=40727&amp;ObjectType=56&amp;O=http%253a%252f%252frmwpublishing.net.au%252flabs-1%252fgoing-south-time-lapsed-travel</link><guid isPermaLink="true">http://rmwpublishing.net.au/labs-1/going-south-time-lapsed-travel</guid><pubDate>Sat, 13 Jun 2009 14:00:00 GMT</pubDate><content:encoded><![CDATA[<p>In a few weeks we will be meeting up with a fellow geek and long-time friend Ben Duncan in the middle of Australia.</p>
<p>Ben and I both went to primary and high school together and both created jobs for ourselves in the IT industry. While I <a href="http://rmwpublishing.net.au/services">develop websites</a>, Ben has built an <a rel="external" href="http://atmail.com/">online email management product</a>.</p>
<p>I'm taking the family on a 2-week break, driving from Alice Springs to Darwin.</p>
<p>On the other hand, Ben has been travelling down the east coast since January on a journey planned to take him right around the edge of Australia. On the roof of his <abbr title="Four Wheel Drive">4WD</abbr> he has set up a webcam to take a photo every 5 minutes and upload it to a server on the internet (the server is nicknamed "<a rel="external" href="http://theborg.calacode.com/carcam/">the borg</a>"). <a rel="external" href="http://atmail.com/blog/category/mobile-office/">Read more about his trip and car setup on his blog</a>.</p>
<h3>THEN: The Website</h3>
<p>When Ben dropped into Bathurst on his way south, he showed me his setup: roof-top tent, luggage compartments, solar panels, batteries, inverters, NextG modem, wi-fi router, laptop and webcam. All that he had about the trip online was a domain name pointing to his blog post. He hadn't even linked to all the images already uploaded. While Ben caught up on some business emails on my ADSL 2 connection, I grabbed a <a rel="external" href="http://www.opensourcetemplates.org/">free designer template</a>, <a rel="external" href="http://maps.google.com.au/maps/ms?msa=0&msid=115858298138302585857.000464f8306c5ab4a2ca3&ie=UTF8&ll=-29.573457,143.041992&spn=10.691127,24.257813&z=5&source=embed">Google My Maps</a>, <a rel="external" href="http://twitter.com/atmail/">his company Twitter</a> feed and wrote <a rel="external" href="http://theborg.calacode.com/carcam/latest.php">a script to link to the latest image uploaded</a>, and within less than an hour we had the domain pointed at a <a rel="external" href="http://atmailmobileoffice.com/">mashed-up portal for Ben's Mobile Office adventure around Australia</a>.</p>
<h3>TODAY: The Production</h3>
<p>With iTunes pumping out some Sunday morning beats and Skype running (in preparation for Ben to come online and abuse me for hammering his server) <a rel="external" href="http://getfirefox.com/">Firefox</a> struggled through a quickly whipped-up 3-line jQuery script (only 2 lines really do anything, the other is syntax) to glean the URLs of the images and display them on the same page.</p>
<pre><code class="javascript jquery">
$('a').each(function() {
	$(this).after('&lt;img src="' + $(this).attr('href') + '" /&gt;');
});
</code>
</pre>
<p>Then I used the <strong>Save Web Page, Complete</strong> option in Firefox to grab all the rendered files. While that was saving (it took about 10 minutes) I was in Apple's drag-and-drop scripting editor Automator creating a workflow to rename all the downloaded images with filenames of consecutive numbers based on their timestamps. For some reason (I guess the nature of being on the road without a constant 'net connection or power supply) the image file names were not consecutive. They were all over the place! The file creation date was the only way to put the files in chronological order.</p>
<p>Unfortunately, when the files were saved to my desktop the original creation date was reset to today's date (and the files don't have <acronym title="Exchangeable Image File Format">EXIF</acronym> info), so I had no data to re-order the files. I thought about writing a PHP script to fetch the files and save them directly to my local web server with a file name based on its position in the borg's Apache listing (sorted by date). Or I could just ask Ben for the <acronym title="File Transfer Protocol">FTP</acronym> details and download them directly from the server. I shot him an email requesting the connection details; I even sent him a 30-frame grab of the un-ordered images in mpg format to entice him that this would be cool. About 10 seconds after the email was sent I decided I was up for the challenge and got started on a <acronym title="Hypertext Preprocessor">PHP</acronym> script to fetch the files and save them locally. I modified the jQuery (my current JavaScript library of choice, so I use it where I can) to build an array of the filenames, which I then copied and pasted into a PHP script to fetch the files and save them locally. I added a 2-second pause between each fetch to give the server a break. With 4017 image files it took <strong>over 2 hrs</strong> to get all the images, but now they are neatly ordered.</p>
<pre><code class="javascript jquery">
var array = [];
$('a').each(function() {
	array.push( $(this).attr('href') );
});
$('body').prepend('["' + array.join('","') + '"]');
</code>
</pre>
<pre><code class="php">
$array = array('050101003114001.jpg', … '050101050728001.jpg');
$count = 0;

foreach ($array as $file) {
	$count++;
	$remoteFileContents = file_get_contents('http://theborg.calacode.com/carcam/' . $file);
	$newFileHandle = fopen('/atmailmobileoffice/' . $count . '.jpg', 'w');
	fwrite($newFileHandle, $remoteFileContents);
	fclose($newFileHandle);
	sleep(2);
}
</code>
</pre>
<h3>RESULT: The Movie</h3>
<p>From there it was a cinch to use the QuickTime "Import Image sequence" option to build a time-lapsed movie of <strong>3983 frames</strong> (after removing 34 corrupted images), which at <strong>15 frames a second</strong> resulted in a movie just over <strong>4:25 min</strong> long. I added some <a rel="external" href="http://www.dance-industries.com/view_artist.php?ID=704&track=8864">copyleft music</a>, compressed it and <a rel="external" href="http://vimeo.com/5151475">uploaded it to Vimeo</a> (which, to me, has better HD compression than other hosted social video services).</p>
<p>And here it is:</p>
<div class="media flash">
<object width="640" height="368" data="http://vimeo.com/moogaloop.swf?clip_id=5151475&server=vimeo.com&show_title=1&show_byline=0&show_portrait=0&color=00adef&fullscreen=1" type="application/x-shockwave-flash">
<param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=5151475&server=vimeo.com&show_title=1&show_byline=0&show_portrait=0&color=00adef&fullscreen=1" />
<a href="http://vimeo.com/5151475">http://vimeo.com/5151475</a>
</object>
</div>
<p>I like pausing it and scanning each frame/image one at a time when I see something interesting.</p>
<h4>The Extras</h4>
<p>Things I noticed from watching the video:</p>
<ul>
    <li>Ben likes to camp in view of water</li>
    <li>The path they are taking is a hit with travellers (a lot of the time they are driving behind a caravan)</li>
    <li>The camera gets turned off at night (to save power, plus there is nothing to look at)</li>
    <li>It rained a lot! (and the camera enclosure gets condensation)</li>
    <li>The camera is slowly falling to the right (might need to fix that, Ben)</li>
</ul>
<p>From flicking through <strong>143 MB</strong> of raw files here are some of the snaps that grabbed my attention:</p>
<div class="fjump">
<ul class="toc">
    <li class="img"><a href="#snap1"><img width="80" height="80" alt="Thumbnail of seagull over car" src="http://rmwpublishing.net.au/blog/files/labs/benvid-snap1-thumb.jpg" /></a></li>
    <li class="img"><a href="#snap2"><img width="80" height="80" alt="Thumbnail of dog on leash" src="http://rmwpublishing.net.au/blog/files/labs/benvid-snap2-thumb.jpg" /></a></li>
    <li class="img"><a href="#snap3"><img width="80" height="80" alt="Thumbnail of car with brake lights" src="http://rmwpublishing.net.au/blog/files/labs/benvid-snap3-thumb.jpg" /></a></li>
</ul>
<div id="snap1">
<ul class="img">
    <li><img width="300" height="200" alt="Seagull flying over a car" src="http://rmwpublishing.net.au/blog/files/labs/benvid-snap1.jpg" /></li>
</ul>
<p>The seagull was caught in mid-flight.</p>
</div>
<div id="snap2">
<ul class="img">
    <li><img width="300" height="200" alt="Seagull flying near the beach" src="http://rmwpublishing.net.au/blog/files/labs/benvid-snap2.jpg" /></li>
</ul>
<p>Probably not the same seagull, but almost snapped on the same angle. And a dog on a leash.</p>
</div>
<div id="snap3">
<ul class="img">
    <li><img width="300" height="200" alt="Car brake light reflected in water on lens" src="http://rmwpublishing.net.au/blog/files/labs/benvid-snap3.jpg" /></li>
</ul>
<p>When I first watched the video and only saw this for a split second, it looked like the car was on fire.</p>
</div>
</div>]]></content:encoded></item></channel></rss>