<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Mark Needham</title>
	<atom:link href="http://www.markhneedham.com/blog/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.markhneedham.com/blog</link>
	<description>Thoughts on Software Development</description>
	<lastBuildDate>Sat, 17 Mar 2018 12:41:06 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.9.3</generator>
<site xmlns="com-wordpress:feed-additions:1">4492781</site>	<item>
		<title>Neo4j: Cypher &#8211; Neo.ClientError.Statement.TypeError: Don&#8217;t know how to add Double and String</title>
		<link>http://www.markhneedham.com/blog/2018/03/14/neo4j-cypher-neo-clienterror-statement-typeerror-dont-know-add-double-string/</link>
		<comments>http://www.markhneedham.com/blog/2018/03/14/neo4j-cypher-neo-clienterror-statement-typeerror-dont-know-add-double-string/#respond</comments>
		<pubDate>Wed, 14 Mar 2018 16:53:33 +0000</pubDate>
		<dc:creator><![CDATA[Mark Needham]]></dc:creator>
				<category><![CDATA[neo4j]]></category>
		<category><![CDATA[cypher]]></category>

		<guid isPermaLink="false">http://www.markhneedham.com/blog/?p=7260</guid>
		<description><![CDATA[<p>I recently upgraded a Neo4j backed application from Neo4j 3.2 to Neo4j 3.3 and came across an interesting change in behaviour around type coercion which led to my application throwing a bunch of errors. In Neo4j 3.2 and earlier if you added a String to a Double it would coerce the Double to a String [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/03/14/neo4j-cypher-neo-clienterror-statement-typeerror-dont-know-add-double-string/">Neo4j: Cypher &#8211; Neo.ClientError.Statement.TypeError: Don&#8217;t know how to add Double and String</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>
I recently upgraded a Neo4j backed application from Neo4j 3.2 to Neo4j 3.3 and came across an interesting change in behaviour around type coercion which led to my application throwing a bunch of errors.
</p>
<p>
In Neo4j 3.2 and earlier if you added a String to a Double it would coerce the Double to a String and concatenate the values. The following would therefore be valid Cypher:
</p>
<pre lang="cypher">
RETURN toFloat("1.0") + " Mark" AS result

╒══════════╕
│"result"  │
╞══════════╡
│"1.0 Mark"│
└──────────┘
</pre>
<p>
This behaviour changed in the 3.3 series, which throws an exception instead:
</p>
<pre lang="cypher">
RETURN toFloat("1.0") + " Mark"

Neo.ClientError.Statement.TypeError: Don't know how to add `Double(1.000000e+00)` and `String(" Mark")`
</pre>
<p>
We can work around this by forcing our query to run in 3.2 compatibility mode:
</p>
<pre lang="cypher">
CYPHER 3.2
RETURN toFloat("1.0") + " Mark" AS result
</pre>
<p>
or we can convert the Double to a String in our Cypher statement:
</p>
<pre lang="cypher">
RETURN toString(toFloat("1.0")) + " Mark" AS result
</pre>
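<p>
As an aside, Python enforces the same rule, which makes a handy mental model for the new behaviour. This sketch (in Python, not Cypher) shows the implicit addition failing and the explicit conversion, the equivalent of <cite>toString</cite>, succeeding:
</p>

```python
# Python, like Neo4j 3.3, refuses to add a float and a string implicitly
try:
    result = 1.0 + " Mark"       # raises TypeError, as Neo4j 3.3 now does
except TypeError:
    result = str(1.0) + " Mark"  # explicit conversion, like toString()

print(result)  # 1.0 Mark
```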
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/03/14/neo4j-cypher-neo-clienterror-statement-typeerror-dont-know-add-double-string/">Neo4j: Cypher &#8211; Neo.ClientError.Statement.TypeError: Don&#8217;t know how to add Double and String</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.markhneedham.com/blog/2018/03/14/neo4j-cypher-neo-clienterror-statement-typeerror-dont-know-add-double-string/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
	<post-id xmlns="com-wordpress:feed-additions:1">7260</post-id>	</item>
		<item>
		<title>Yelp: Reverse geocoding businesses to extract detailed location information</title>
		<link>http://www.markhneedham.com/blog/2018/03/14/yelp-reverse-geocoding-businesses-extract-detailed-location-information/</link>
		<comments>http://www.markhneedham.com/blog/2018/03/14/yelp-reverse-geocoding-businesses-extract-detailed-location-information/#respond</comments>
		<pubDate>Wed, 14 Mar 2018 08:53:04 +0000</pubDate>
		<dc:creator><![CDATA[Mark Needham]]></dc:creator>
				<category><![CDATA[Python]]></category>
		<category><![CDATA[geocoding]]></category>
		<category><![CDATA[reverse-geocode]]></category>
		<category><![CDATA[yelp]]></category>
		<category><![CDATA[yelp dataset challenge]]></category>

		<guid isPermaLink="false">http://www.markhneedham.com/blog/?p=7258</guid>
		<description><![CDATA[<p>I&#8217;ve been playing around with the Yelp Open Dataset and wanted to extract more detailed location information for each business. This is an example of the JSON representation of one business: $ cat dataset/business.json &#124; head -n1 &#124; jq { "business_id": "FYWN1wneV18bWNgQjJ2GNg", "name": "Dental by Design", "neighborhood": "", "address": "4855 E Warner Rd, Ste B9", [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/03/14/yelp-reverse-geocoding-businesses-extract-detailed-location-information/">Yelp: Reverse geocoding businesses to extract detailed location information</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>
I&#8217;ve been playing around with the <a href="https://www.yelp.co.uk/dataset">Yelp Open Dataset</a> and wanted to extract more detailed location information for each business.
</p>
<p>
This is an example of the JSON representation of one business:
</p>
<pre lang="bash">
$ cat dataset/business.json | head -n1 | jq
{
  "business_id": "FYWN1wneV18bWNgQjJ2GNg",
  "name": "Dental by Design",
  "neighborhood": "",
  "address": "4855 E Warner Rd, Ste B9",
  "city": "Ahwatukee",
  "state": "AZ",
  "postal_code": "85044",
  "latitude": 33.3306902,
  "longitude": -111.9785992,
  "stars": 4,
  "review_count": 22,
  "is_open": 1,
  "attributes": {
    "AcceptsInsurance": true,
    "ByAppointmentOnly": true,
    "BusinessAcceptsCreditCards": true
  },
  "categories": [
    "Dentists",
    "General Dentistry",
    "Health & Medical",
    "Oral Surgeons",
    "Cosmetic Dentists",
    "Orthodontists"
  ],
  "hours": {
    "Friday": "7:30-17:00",
    "Tuesday": "7:30-17:00",
    "Thursday": "7:30-17:00",
    "Wednesday": "7:30-17:00",
    "Monday": "7:30-17:00"
  }
}
</pre>
<p>
The businesses reside in different countries so I wanted to extract the area/county/state and the country for each of them. I found the <a href="https://github.com/thampiman/reverse-geocoder">reverse-geocoder</a> library which is perfect for this problem.
</p>
<p>
You give the library a lat/long, or a list of lat/longs, and it returns a list containing the nearest known lat/long to each of your points along with the place name, admin regions, and country code. It&#8217;s much quicker to pass in a list of lat/longs than to call the function once per point, so we&#8217;ll do that.
</p>
<p>
We can write the following code to extract location information for a list of lat/longs:
</p>
<pre lang="python">
import reverse_geocoder as rg

lat_longs = {
    "FYWN1wneV18bWNgQjJ2GNg": (33.3306902, -111.9785992),
    "He-G7vWjzVUysIKrfNbPUQ": (40.2916853, -80.1048999),
    "KQPW8lFf1y5BT2MxiSZ3QA": (33.5249025, -112.1153098)
}

business_ids = list(lat_longs.keys())
locations = rg.search(list(lat_longs.values()))

for business_id, location in zip(business_ids, locations):
    print(business_id, lat_longs[business_id], location)
</pre>
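<p>
This pairing works because <cite>list(lat_longs.keys())</cite> and <cite>list(lat_longs.values())</cite> iterate the dict in the same (insertion) order, which Python guarantees from 3.7 onwards. A minimal sketch with a stubbed search function (<cite>fake_search</cite> is hypothetical and just stands in for <cite>rg.search</cite>):
</p>

```python
# keys() and values() iterate in the same insertion order (Python 3.7+),
# so zipping ids against one-result-per-input batched output is safe
lat_longs = {
    "id-a": (33.3306902, -111.9785992),
    "id-b": (40.2916853, -80.1048999),
}

def fake_search(points):
    # Hypothetical stand-in for rg.search: one result per input point,
    # in the same order as the input
    return [{"query": p} for p in points]

ids = list(lat_longs.keys())
locations = fake_search(list(lat_longs.values()))

paired = dict(zip(ids, locations))
print(paired["id-b"]["query"])  # (40.2916853, -80.1048999)
```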
<p>
This is the output we get from running the script:
</p>
<pre lang="bash">
$ python blog.py 
Loading formatted geocoded file...
FYWN1wneV18bWNgQjJ2GNg (33.3306902, -111.9785992) OrderedDict([('lat', '33.37088'), ('lon', '-111.96292'), ('name', 'Guadalupe'), ('admin1', 'Arizona'), ('admin2', 'Maricopa County'), ('cc', 'US')])
He-G7vWjzVUysIKrfNbPUQ (40.2916853, -80.1048999) OrderedDict([('lat', '40.2909'), ('lon', '-80.10811'), ('name', 'Thompsonville'), ('admin1', 'Pennsylvania'), ('admin2', 'Washington County'), ('cc', 'US')])
KQPW8lFf1y5BT2MxiSZ3QA (33.5249025, -112.1153098) OrderedDict([('lat', '33.53865'), ('lon', '-112.18599'), ('name', 'Glendale'), ('admin1', 'Arizona'), ('admin2', 'Maricopa County'), ('cc', 'US')])
</pre>
<p>
It seems to work fairly well! Now we just need to tweak our script to read in the values from the Yelp JSON file and generate a new JSON file containing the locations:
</p>
<pre lang="python">
import json

import reverse_geocoder as rg

lat_longs = {}

with open("dataset/business.json") as business_json:
    for line in business_json.readlines():
        item = json.loads(line)
        if item["latitude"] and item["longitude"]:
            lat_longs[item["business_id"]] = {
                "lat_long": (item["latitude"], item["longitude"]),
                "city": item["city"]
            }

result = {}

business_ids = list(lat_longs.keys())
locations = rg.search([value["lat_long"] for value in lat_longs.values()])

for business_id, location in zip(business_ids, locations):
    result[business_id] = {
        "country": location["cc"],
        "name": location["name"],
        "admin1": location["admin1"],
        "admin2": location["admin2"],
        "city": lat_longs[business_id]["city"]
    }

with open("dataset/businessLocations.json", "w") as business_locations_json:
    json.dump(result, business_locations_json, indent=4, sort_keys=True)
</pre>
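<p>
One detail worth calling out: <cite>business.json</cite> is newline-delimited JSON, one document per line, rather than a single JSON array, which is why the script parses each line separately instead of calling <cite>json.load</cite> once. A self-contained sketch of that loop, using stubbed made-up records in place of the real file:
</p>

```python
import io
import json

# Stubbed stand-in for dataset/business.json: newline-delimited JSON,
# one business per line (these two records are made up for illustration)
fake_file = io.StringIO(
    '{"business_id": "b1", "latitude": 33.3, "longitude": -111.9, "city": "Ahwatukee"}\n'
    '{"business_id": "b2", "latitude": null, "longitude": null, "city": "Unknown"}\n'
)

lat_longs = {}
for line in fake_file:
    item = json.loads(line)
    # skip records without coordinates, as the full script does
    if item["latitude"] and item["longitude"]:
        lat_longs[item["business_id"]] = {
            "lat_long": (item["latitude"], item["longitude"]),
            "city": item["city"],
        }

print(lat_longs)
```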
<p>
And that&#8217;s it!</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/03/14/yelp-reverse-geocoding-businesses-extract-detailed-location-information/">Yelp: Reverse geocoding businesses to extract detailed location information</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.markhneedham.com/blog/2018/03/14/yelp-reverse-geocoding-businesses-extract-detailed-location-information/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
	<post-id xmlns="com-wordpress:feed-additions:1">7258</post-id>	</item>
		<item>
		<title>Running asciidoctor-pdf on TeamCity</title>
		<link>http://www.markhneedham.com/blog/2018/03/13/running-asciidoctor-pdf-teamcity/</link>
		<comments>http://www.markhneedham.com/blog/2018/03/13/running-asciidoctor-pdf-teamcity/#respond</comments>
		<pubDate>Tue, 13 Mar 2018 21:57:14 +0000</pubDate>
		<dc:creator><![CDATA[Mark Needham]]></dc:creator>
				<category><![CDATA[Software Development]]></category>
		<category><![CDATA[asciidoc]]></category>
		<category><![CDATA[asciidoctor]]></category>
		<category><![CDATA[asciidoctor-pdf]]></category>
		<category><![CDATA[gemfile]]></category>
		<category><![CDATA[Ruby]]></category>

		<guid isPermaLink="false">http://www.markhneedham.com/blog/?p=7256</guid>
		<description><![CDATA[<p>I&#8217;ve been using asciidoctor-pdf to generate PDF and while I was initially running the tool locally I eventually decided to setup a build on TeamCity. It was a bit trickier than I expected, mostly because I&#8217;m not that familiar with deploying Ruby applications, but I thought I&#8217;d capture what I&#8217;ve done for future me. I [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/03/13/running-asciidoctor-pdf-teamcity/">Running asciidoctor-pdf on TeamCity</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>
I&#8217;ve been using <a href="https://asciidoctor.org/docs/asciidoctor-pdf/">asciidoctor-pdf</a> to generate PDFs, and while I was initially running the tool locally, I eventually decided to set up a build on TeamCity.
</p>
<p>
It was a bit trickier than I expected, mostly because I&#8217;m not that familiar with deploying Ruby applications, but I thought I&#8217;d capture what I&#8217;ve done for future me.
</p>
<p>
I have the following <cite>Gemfile</cite> that installs asciidoctor-pdf and its dependencies:
</p>
<p><cite>Gemfile</cite></p>
<pre lang="text">
source 'https://rubygems.org'

gem 'prawn'
gem 'addressable'
gem 'prawn-svg'
gem 'prawn-templates'
gem 'asciidoctor-pdf'
</pre>
<p>
I don&#8217;t have permissions to install gems globally on the build agents so I&#8217;m bundling those up into the <cite>vendor</cite> directory. It&#8217;s been a long time since I worked on a Ruby application so perhaps that&#8217;s par for the course.
</p>
<pre lang="bash">
bundle install --path vendor/
bundle package
</pre>
<p>
On the build agent I&#8217;m running the following script:
</p>
<pre lang="bash">
export PATH="$PATH:/home/teamcity/.gem/ruby/2.3.0/bin"
mkdir $PWD/gems
export GEM_HOME="$PWD/gems"
gem install bundler --user-install --no-rdoc --no-ri && bundle install
./vendor/ruby/2.3.0/bin/asciidoctor-pdf -a allow-uri-read blog.adoc
</pre>
<p>
I override where gems should be installed and then execute the <cite>asciidoctor-pdf</cite> executable from the vendor directory.
</p>
<p>
It all seems to work quite nicely, but if there&#8217;s a better approach that I should be taking, let me know in the comments.</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/03/13/running-asciidoctor-pdf-teamcity/">Running asciidoctor-pdf on TeamCity</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.markhneedham.com/blog/2018/03/13/running-asciidoctor-pdf-teamcity/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
	<post-id xmlns="com-wordpress:feed-additions:1">7256</post-id>	</item>
		<item>
		<title>Neo4j Import: java.lang.IllegalStateException: Mixing specified and unspecified group belongings in a single import isn&#8217;t supported</title>
		<link>http://www.markhneedham.com/blog/2018/03/07/neo4j-import-java-lang-illegalstateexception-mixing-specified-unspecified-group-belongings-single-import-isnt-supported/</link>
		<comments>http://www.markhneedham.com/blog/2018/03/07/neo4j-import-java-lang-illegalstateexception-mixing-specified-unspecified-group-belongings-single-import-isnt-supported/#respond</comments>
		<pubDate>Wed, 07 Mar 2018 03:11:12 +0000</pubDate>
		<dc:creator><![CDATA[Mark Needham]]></dc:creator>
				<category><![CDATA[neo4j]]></category>
		<category><![CDATA[bulk import]]></category>
		<category><![CDATA[neo4j-import]]></category>

		<guid isPermaLink="false">http://www.markhneedham.com/blog/?p=7253</guid>
		<description><![CDATA[<p>I&#8217;ve been working with the Neo4j Import Tool recently after a bit of a break and ran into an interesting error message that I initially didn&#8217;t understand. I had some CSV files containing nodes that I wanted to import into Neo4j. Their contents look like this: $ cat people_header.csv name:ID(Person) $ cat people.csv "Mark" "Michael" [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/03/07/neo4j-import-java-lang-illegalstateexception-mixing-specified-unspecified-group-belongings-single-import-isnt-supported/">Neo4j Import: java.lang.IllegalStateException: Mixing specified and unspecified group belongings in a single import isn&#8217;t supported</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>
I&#8217;ve been working with the <a href="https://neo4j.com/docs/operations-manual/current/tools/import/">Neo4j Import Tool</a> recently after a bit of a break and ran into an interesting error message that I initially didn&#8217;t understand.
</p>
<p>
I had some CSV files containing nodes that I wanted to import into Neo4j. Their contents look like this:
</p>
<pre lang="bash">
$ cat people_header.csv 
name:ID(Person)

$ cat people.csv 
"Mark"
"Michael"
"Ryan"
"Will"
"Jennifer"
"Karin"

$ cat companies_header.csv 
name:ID(Company)

$ cat companies.csv 
"Neo4j"
</pre>
<p>
I find it easier to use separate header files because I often make typos with my column names, and it&#8217;s easier to update a single-line file than to open a multi-million-line file and change the first line.
</p>
<p>
I ran the following command to create a new Neo4j database from these files:
</p>
<pre lang="bash">
$ ./bin/neo4j-admin import \
	--database=blog.db \
	--mode=csv \
	--nodes:Person people_header.csv,people.csv \
	--nodes:Company companies_heade.csv,companies.csv
</pre>
<p>
which resulted in this error message:
</p>
<pre lang="bash">
Neo4j version: 3.3.3
Importing the contents of these files into /Users/markneedham/Library/Application Support/Neo4j Desktop/Application/neo4jDatabases/database-b59e33d5-2060-4a5d-bdb8-0b9f6dc919fa/installation-3.3.3/data/databases/blog.db:
Nodes:
  :Person
  /Users/markneedham/Library/Application Support/Neo4j Desktop/Application/neo4jDatabases/database-b59e33d5-2060-4a5d-bdb8-0b9f6dc919fa/installation-3.3.3/people_header.csv
  /Users/markneedham/Library/Application Support/Neo4j Desktop/Application/neo4jDatabases/database-b59e33d5-2060-4a5d-bdb8-0b9f6dc919fa/installation-3.3.3/people.csv

  :Company
  /Users/markneedham/Library/Application Support/Neo4j Desktop/Application/neo4jDatabases/database-b59e33d5-2060-4a5d-bdb8-0b9f6dc919fa/installation-3.3.3/companies.csv

...

Import error: Mixing specified and unspecified group belongings in a single import isn't supported
Caused by:Mixing specified and unspecified group belongings in a single import isn't supported
java.lang.IllegalStateException: Mixing specified and unspecified group belongings in a single import isn't supported
	at org.neo4j.unsafe.impl.batchimport.input.Groups.getOrCreate(Groups.java:52)
	at org.neo4j.unsafe.impl.batchimport.input.csv.InputNodeDeserialization.initialize(InputNodeDeserialization.java:60)
	at org.neo4j.unsafe.impl.batchimport.input.csv.InputEntityDeserializer.initialize(InputEntityDeserializer.java:68)
	at org.neo4j.unsafe.impl.batchimport.input.csv.ParallelInputEntityDeserializer.lambda$new$0(ParallelInputEntityDeserializer.java:104)
	at org.neo4j.unsafe.impl.batchimport.staging.TicketedProcessing.lambda$submit$1(TicketedProcessing.java:103)
	at org.neo4j.unsafe.impl.batchimport.executor.DynamicTaskExecutor$Processor.run(DynamicTaskExecutor.java:237)
</pre>
<p>
The output helpfully lists the files it&#8217;s importing from, and under the &#8216;Company&#8217; section we can see that the header file is missing.
</p>
<p>As a result of the typo I made when trying to type <cite>companies_header.csv</cite>, the tool now treats the first line of <cite>companies.csv</cite> as the header and since we haven&#8217;t specified a group (e.g. Company, Person) on that line we receive this error.
</p>
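<p>
A cheap way to catch this class of mistake before kicking off a long-running import is to sanity-check each header file up front. The helper below is hypothetical, not part of <cite>neo4j-admin</cite>; it just verifies that the header file exists and that its first line declares an <cite>:ID</cite> column:
</p>

```python
import os
import tempfile

def check_header(path):
    # Hypothetical pre-flight check, not part of neo4j-admin: the header
    # file must exist and its first line must declare an :ID column,
    # e.g. name:ID(Person)
    if not os.path.exists(path):
        return "missing header file: " + path
    with open(path) as f:
        first_line = f.readline().strip()
    if ":ID" not in first_line:
        return path + ": first line " + repr(first_line) + " does not look like a header"
    return None

# Demo: a correct header passes, a typo'd file name is caught
tmpdir = tempfile.mkdtemp()
good = os.path.join(tmpdir, "people_header.csv")
with open(good, "w") as f:
    f.write("name:ID(Person)\n")

ok = check_header(good)
err = check_header(os.path.join(tmpdir, "companies_heade.csv"))
print(err)
```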
<p>
Let&#8217;s fix the typo and try again:
</p>
<pre lang="bash">
$ ./bin/neo4j-admin import \
	--database=blog.db \
	--mode=csv \
	--nodes:Person people_header.csv,people.csv \
	--nodes:Company companies_header.csv,companies.csv

Neo4j version: 3.3.3
Importing the contents of these files into /Users/markneedham/Library/Application Support/Neo4j Desktop/Application/neo4jDatabases/database-b59e33d5-2060-4a5d-bdb8-0b9f6dc919fa/installation-3.3.3/data/databases/blog.db:
Nodes:
  :Person
  /Users/markneedham/Library/Application Support/Neo4j Desktop/Application/neo4jDatabases/database-b59e33d5-2060-4a5d-bdb8-0b9f6dc919fa/installation-3.3.3/people_header.csv
  /Users/markneedham/Library/Application Support/Neo4j Desktop/Application/neo4jDatabases/database-b59e33d5-2060-4a5d-bdb8-0b9f6dc919fa/installation-3.3.3/people.csv

  :Company
  /Users/markneedham/Library/Application Support/Neo4j Desktop/Application/neo4jDatabases/database-b59e33d5-2060-4a5d-bdb8-0b9f6dc919fa/installation-3.3.3/companies_header.csv
  /Users/markneedham/Library/Application Support/Neo4j Desktop/Application/neo4jDatabases/database-b59e33d5-2060-4a5d-bdb8-0b9f6dc919fa/installation-3.3.3/companies.csv

...

IMPORT DONE in 1s 5ms. 
Imported:
  7 nodes
  0 relationships
  7 properties
Peak memory usage: 480.00 MB
</pre>
<p>
Success!</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/03/07/neo4j-import-java-lang-illegalstateexception-mixing-specified-unspecified-group-belongings-single-import-isnt-supported/">Neo4j Import: java.lang.IllegalStateException: Mixing specified and unspecified group belongings in a single import isn&#8217;t supported</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.markhneedham.com/blog/2018/03/07/neo4j-import-java-lang-illegalstateexception-mixing-specified-unspecified-group-belongings-single-import-isnt-supported/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
	<post-id xmlns="com-wordpress:feed-additions:1">7253</post-id>	</item>
		<item>
		<title>Asciidoctor: Creating a macro</title>
		<link>http://www.markhneedham.com/blog/2018/02/19/asciidoctor-creating-macro/</link>
		<comments>http://www.markhneedham.com/blog/2018/02/19/asciidoctor-creating-macro/#respond</comments>
		<pubDate>Mon, 19 Feb 2018 20:51:31 +0000</pubDate>
		<dc:creator><![CDATA[Mark Needham]]></dc:creator>
				<category><![CDATA[Software Development]]></category>
		<category><![CDATA[asciidoc]]></category>
		<category><![CDATA[asciidoctor]]></category>

		<guid isPermaLink="false">http://www.markhneedham.com/blog/?p=7249</guid>
		<description><![CDATA[<p>I&#8217;ve been writing the TWIN4j blog for almost a year now and during that time I&#8217;ve written a few different asciidoc macros to avoid repetition. The most recent one I wrote does the formatting around the Featured Community Member of the Week. I call it like this from the asciidoc, passing in the name of [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/02/19/asciidoctor-creating-macro/">Asciidoctor: Creating a macro</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>
I&#8217;ve been writing the <a href="https://neo4j.com/tag/twin4j/">TWIN4j blog</a> for almost a year now and during that time I&#8217;ve written a few different <a href="http://asciidoc.org/chunked/ch21.html">asciidoc macros</a> to avoid repetition.
</p>
<p>
The most recent one I wrote does the formatting around the Featured Community Member of the Week. I call it like this from the asciidoc, passing in the name of the person and a link to an image:
</p>
<pre lang="text">
featured::https://s3.amazonaws.com/dev.assets.neo4j.com/wp-content/uploads/20180202004247/this-week-in-neo4j-3-february-2018.jpg[name="Suellen Stringer-Hye"]
</pre>
<p>
The code for the macro has two parts. The first is some wiring code that registers the macro with Asciidoctor:
</p>
<p>
<cite>lib/featured-macro.rb</cite>
</p>
<pre lang="ruby">
RUBY_ENGINE == 'opal' ? (require 'featured-macro/extension') : (require_relative 'featured-macro/extension')

Asciidoctor::Extensions.register do
  if (@document.basebackend? 'html') && (@document.safe < SafeMode::SECURE)
    block_macro FeaturedBlockMacro
  end
end
</pre>
<p>
And this is the code for the macro itself:
</p>
<p>
<cite>lib/featured-macro/extension.rb</cite>
</p>
<pre lang="ruby">
require 'asciidoctor/extensions' unless RUBY_ENGINE == 'opal'

include ::Asciidoctor

class FeaturedBlockMacro < Extensions::BlockMacroProcessor
  use_dsl

  named :featured

  def process parent, target, attrs
    name = attrs["name"]

    html = %(<div class="imageblock image-heading">
                <div class="content">
                    <img src="#{target}" alt="#{name} - This Week’s Featured Community Member" width="800" height="400">
                </div>
            </div>
            <p style="font-size: .8em; line-height: 1.5em;" align="center">
              <strong>#{name} - This Week's Featured Community Member</strong>
            </p>)

    create_pass_block parent, html, attrs, subs: nil
  end
end
</pre>
<p>
When we convert the asciidoc into HTML we need to tell asciidoctor about the macro, which we can do like this:
</p>
<pre lang="bash">
asciidoctor template.adoc \
  -r ./lib/featured-macro.rb \
  -o -
</pre>
<p>
And that's it!</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/02/19/asciidoctor-creating-macro/">Asciidoctor: Creating a macro</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.markhneedham.com/blog/2018/02/19/asciidoctor-creating-macro/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
	<post-id xmlns="com-wordpress:feed-additions:1">7249</post-id>	</item>
		<item>
		<title>Tensorflow: Kaggle Spooky Authors Bag of Words Model</title>
		<link>http://www.markhneedham.com/blog/2018/01/29/tensorflow-kaggle-spooky-authors-bag-words-model/</link>
		<comments>http://www.markhneedham.com/blog/2018/01/29/tensorflow-kaggle-spooky-authors-bag-words-model/#respond</comments>
		<pubDate>Mon, 29 Jan 2018 06:51:10 +0000</pubDate>
		<dc:creator><![CDATA[Mark Needham]]></dc:creator>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[kaggle]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[tensorflow]]></category>

		<guid isPermaLink="false">http://www.markhneedham.com/blog/?p=7245</guid>
		<description><![CDATA[<p>I&#8217;ve been playing around with some Tensorflow tutorials recently and wanted to see if I could create a submission for Kaggle&#8217;s Spooky Author Identification competition that I&#8217;ve written about recently. My model is based on one from the text classification tutorial. The tutorial shows how to create custom Estimators which we can learn more about [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/01/29/tensorflow-kaggle-spooky-authors-bag-words-model/">Tensorflow: Kaggle Spooky Authors Bag of Words Model</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>
I&#8217;ve been playing around with some Tensorflow tutorials recently and wanted to see if I could create a submission for <a href="https://www.kaggle.com/c/spooky-author-identification">Kaggle&#8217;s Spooky Author Identification competition</a> that I&#8217;ve written about recently.
</p>
<p>
My model is based on one from the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/text_classification.py">text classification tutorial</a>. The tutorial shows how to create custom Estimators which we can learn more about in <a href="https://developers.googleblog.com/2017/12/creating-custom-estimators-in-tensorflow.html">a post on the Google Developers blog</a>.
</p>
<h3>Imports</h3>
<p>
Let&#8217;s get started. First, our imports:
</p>
<pre lang="python">
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
</pre>
<p>
We&#8217;ve obviously got Tensorflow, but also scikit-learn, which we&#8217;ll use to split our data into training and test sets as well as to convert the author names into numeric values.
</p>
<h3>Model building functions</h3>
<p>
Next we&#8217;ll create a function to create a bag of words model. This function calls another one that creates different <cite>EstimatorSpec</cite>s depending on the context it&#8217;s called from.
</p>
<pre lang="python">
EMBEDDING_SIZE = 50
MAX_LABEL = 3
WORDS_FEATURE = 'words'  # Name of the input words feature.


def bag_of_words_model(features, labels, mode):
    bow_column = tf.feature_column.categorical_column_with_identity(WORDS_FEATURE, num_buckets=n_words)
    bow_embedding_column = tf.feature_column.embedding_column(bow_column, dimension=EMBEDDING_SIZE)
    bow = tf.feature_column.input_layer(features, feature_columns=[bow_embedding_column])
    logits = tf.layers.dense(bow, MAX_LABEL, activation=None)
    return create_estimator_spec(logits=logits, labels=labels, mode=mode)


def create_estimator_spec(logits, labels, mode):
    predicted_classes = tf.argmax(logits, 1)
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(
            mode=mode,
            predictions={
                'class': predicted_classes,
                'prob': tf.nn.softmax(logits),
                'log_loss': tf.nn.softmax(logits),
            })

    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
        train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    eval_metric_ops = {
        'accuracy': tf.metrics.accuracy(labels=labels, predictions=predicted_classes)
    }
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
</pre>
<h3>Loading data</h3>
<p>
Now we&#8217;re ready to load our data.
</p>
<pre lang="python">
Y_COLUMN = "author"
TEXT_COLUMN = "text"
le = preprocessing.LabelEncoder()

train_df = pd.read_csv("train.csv")
X = pd.Series(train_df[TEXT_COLUMN])
y = le.fit_transform(train_df[Y_COLUMN].copy())
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
</pre>
<p>
The only interesting thing here is the <cite>LabelEncoder</cite>. We&#8217;ll keep that around as we&#8217;ll use it later as well.
</p>
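<p>
For reference, <cite>LabelEncoder</cite> maps each distinct class to an integer index over the sorted class names, and remembers the mapping so it can be inverted later. A minimal sketch of that behaviour in plain Python, using the competition&#8217;s three author codes:
</p>

```python
# Minimal sketch of what sklearn's LabelEncoder does: map each distinct
# class name to an integer index over the sorted class names
authors = ["EAP", "HPL", "MWS", "EAP", "HPL"]

classes = sorted(set(authors))           # -> le.classes_
index = {c: i for i, c in enumerate(classes)}

encoded = [index[a] for a in authors]    # like fit_transform
decoded = [classes[i] for i in encoded]  # like inverse_transform

print(encoded)  # [0, 1, 2, 0, 1]
```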
<h3>Transform documents</h3>
<p>
At the moment our training and test dataframes contain text, but Tensorflow works with vectors so we need to convert our data into that format. We can use the <cite><a href="http://tflearn.org/data_utils/#vocabulary-processor">VocabularyProcessor</a></cite> to do this:
</p>
<pre lang="python">
MAX_DOCUMENT_LENGTH = 100
vocab_processor = tf.contrib.learn.preprocessing.VocabularyProcessor(MAX_DOCUMENT_LENGTH)

X_transform_train = vocab_processor.fit_transform(X_train)
X_transform_test = vocab_processor.transform(X_test)

X_train = np.array(list(X_transform_train))
X_test = np.array(list(X_transform_test))

n_words = len(vocab_processor.vocabulary_)
print('Total words: %d' % n_words)
</pre>
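<p>
Roughly speaking, <cite>VocabularyProcessor</cite> builds a word-to-id vocabulary on fit and then maps each document to a fixed-length vector of ids, padding or truncating to <cite>MAX_DOCUMENT_LENGTH</cite>, with id 0 reserved for unknown/padding words. A simplified sketch of that behaviour (the real tokenizer differs in its details):
</p>

```python
# Simplified sketch of tf.contrib.learn's VocabularyProcessor: build a
# word->id vocabulary on fit, then map documents to fixed-length id
# vectors, padding/truncating to max_len. Id 0 means unknown/padding.
MAX_LEN = 5

def fit(docs):
    vocab = {}
    for doc in docs:
        for word in doc.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab) + 1  # ids start at 1
    return vocab

def transform(docs, vocab, max_len=MAX_LEN):
    out = []
    for doc in docs:
        ids = [vocab.get(w, 0) for w in doc.lower().split()]
        out.append((ids + [0] * max_len)[:max_len])  # pad or truncate
    return out

vocab = fit(["the raven tapped", "the black cat"])
print(transform(["the cat tapped", "nevermore"], vocab))
# [[1, 5, 3, 0, 0], [0, 0, 0, 0, 0]]
```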
<h3>Training our model</h3>
<p>
Finally we&#8217;re ready to train our model! We&#8217;ll call the Bag of Words model we created at the beginning and build a train input function where we pass in the training arrays that we just created:
</p>
<pre lang="python">
model_fn = bag_of_words_model
classifier = tf.estimator.Estimator(model_fn=model_fn)

train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={WORDS_FEATURE: X_train},
    y=y_train,
    batch_size=len(X_train),
    num_epochs=None,
    shuffle=True)
classifier.train(input_fn=train_input_fn, steps=100)
</pre>
<h3>Evaluating our model</h3>
<p>
Let&#8217;s see how our model fares. We&#8217;ll call the <cite>evaluate</cite> function with our test data:
</p>
<pre lang="python">
test_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={WORDS_FEATURE: X_test},
    y=y_test,
    num_epochs=1,
    shuffle=False)

scores = classifier.evaluate(input_fn=test_input_fn)
print('Accuracy: {0:f}, Loss {1:f}'.format(scores['accuracy'], scores["loss"]))
</pre>
<pre lang="text">
INFO:tensorflow:Saving checkpoints for 1 into /var/folders/k5/ssmkw9vd2yb3h5wnqlxnqbkw0000gn/T/tmpb6v4rrrn/model.ckpt.
INFO:tensorflow:loss = 1.0888131, step = 1
INFO:tensorflow:Saving checkpoints for 100 into /var/folders/k5/ssmkw9vd2yb3h5wnqlxnqbkw0000gn/T/tmpb6v4rrrn/model.ckpt.
INFO:tensorflow:Loss for final step: 0.18394235.
INFO:tensorflow:Starting evaluation at 2018-01-28-22:41:34
INFO:tensorflow:Restoring parameters from /var/folders/k5/ssmkw9vd2yb3h5wnqlxnqbkw0000gn/T/tmpb6v4rrrn/model.ckpt-100
INFO:tensorflow:Finished evaluation at 2018-01-28-22:41:34
INFO:tensorflow:Saving dict for global step 100: accuracy = 0.8246673, global_step = 100, loss = 0.44942895
Accuracy: 0.824667, Loss 0.449429
</pre>
<p>
Not too bad! It doesn&#8217;t beat the log loss score of ~0.36 that I managed to get with a scikit-learn ensemble model, but it is better than some of my first attempts.
</p>
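<p>
Since Kaggle scores this competition on multi-class log loss rather than accuracy, it&#8217;s worth computing that metric too. A small sketch with scikit-learn, using made up predicted probabilities rather than the model&#8217;s real output:
</p>

```python
from sklearn.metrics import log_loss

# Hypothetical predicted probabilities for three classes
y_true = [0, 1, 2]
y_prob = [
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
]
score = log_loss(y_true, y_prob, labels=[0, 1, 2])
print("Log loss: {0:f}".format(score))
```
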
<h3>Generating predictions</h3>
<p>
I wanted to see how it&#8217;d do against Kaggle&#8217;s test dataset so I generated a CSV file with predictions:
</p>
<pre lang="python">
test_df = pd.read_csv("test.csv")

X_test = pd.Series(test_df[TEXT_COLUMN])
X_test = np.array(list(vocab_processor.transform(X_test)))

test_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={WORDS_FEATURE: X_test},
    num_epochs=1,
    shuffle=False)

predictions = classifier.predict(test_input_fn)
y_predicted_classes = np.array(list(p['prob'] for p in predictions))

output = pd.DataFrame(y_predicted_classes, columns=le.classes_)
output["id"] = test_df["id"]
output.to_csv("output.csv", index=False, float_format='%.6f')
</pre>
<p>
Here we go:
</p>
<div>
<img src="http://www.markhneedham.com/blog/wp-content/uploads/2018/01/2018-01-29_06-44-30.png" alt="2018 01 29 06 44 30" title="2018-01-29_06-44-30.png" border="0" width="456" height="64" />
</div>
<p>
The score is roughly the same as we saw with the test split of the training set. If you want to see all the code in one place I&#8217;ve <a href="https://github.com/mneedham/spooky-author-identification/blob/master/tf_test.py">put it on my Spooky Authors GitHub repository</a>.</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/01/29/tensorflow-kaggle-spooky-authors-bag-words-model/">Tensorflow: Kaggle Spooky Authors Bag of Words Model</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.markhneedham.com/blog/2018/01/29/tensorflow-kaggle-spooky-authors-bag-words-model/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
	<post-id xmlns="com-wordpress:feed-additions:1">7245</post-id>	</item>
		<item>
		<title>Asciidoc to Asciidoc: Exploding includes</title>
		<link>http://www.markhneedham.com/blog/2018/01/23/asciidoc-asciidoc-exploding-includes/</link>
		<comments>http://www.markhneedham.com/blog/2018/01/23/asciidoc-asciidoc-exploding-includes/#respond</comments>
		<pubDate>Tue, 23 Jan 2018 21:11:49 +0000</pubDate>
		<dc:creator><![CDATA[Mark Needham]]></dc:creator>
				<category><![CDATA[Software Development]]></category>
		<category><![CDATA[asciidoc]]></category>
		<category><![CDATA[asciidoctor]]></category>

		<guid isPermaLink="false">http://www.markhneedham.com/blog/?p=7241</guid>
		<description><![CDATA[<p>One of my favourite features in AsciiDoc is the ability to include other files, but the problem with using lots of includes is that it becomes difficult to read the whole document unless you convert it to one of the supported backends. $ asciidoctor --help Usage: asciidoctor [OPTION]... FILE... Translate the AsciiDoc source FILE or FILE(s) into [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/01/23/asciidoc-asciidoc-exploding-includes/">Asciidoc to Asciidoc: Exploding includes</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>
One of my favourite features in <a href="http://asciidoctor.org/docs/asciidoc-syntax-quick-reference/#include-files">AsciiDoc</a> is the ability to include other files, but the problem with using lots of includes is that it becomes difficult to read the whole document unless you convert it to one of the supported backends.
</p>
<pre lang="bash">
$ asciidoctor --help
Usage: asciidoctor [OPTION]... FILE...
Translate the AsciiDoc source FILE or FILE(s) into the backend output format (e.g., HTML 5, DocBook 4.5, etc.)
By default, the output is written to a file with the basename of the source file and the appropriate extension.
Example: asciidoctor -b html5 source.asciidoc

    -b, --backend BACKEND            set output format backend: [html5, xhtml5, docbook5, docbook45, manpage] (default: html5)
                                     additional backends are supported via extensions (e.g., pdf, latex)
</pre>
<p>
I don&#8217;t want to have to convert my code to one of these formats each time &#8211; I want to convert asciidoc to asciidoc!
</p>
<p>
For example, given the following files:
</p>
<p><cite>mydoc.adoc</cite></p>
<pre lang="text">
= My Blog example

== Heading 1

Some awesome text

== Heading 2

include::blog_include.adoc[]
</pre>
<p><cite>blog_include.adoc</cite></p>
<pre lang="text">
Some included text
</pre>
<p>I want to generate another asciidoc file where the contents of the include file are exploded and displayed inline.
</p>
<p>
After a lot of searching I came across <a href="https://github.com/asciidoctor/asciidoctor-extensions-lab/blob/master/scripts/asciidoc-coalescer.rb">an excellent script</a> written by Dan Allen and put it in a file called <cite>adoc.rb</cite>. We can then call it like this:
</p>
<pre lang="bash">
$ ruby adoc.rb mydoc.adoc
= My Blog example

== Heading 1

Some awesome text

== Heading 2

Some included text
</pre>
<p>
Problem solved!
</p>
<p>In my case I actually wanted to explode HTTP includes so I needed to pass the <cite>-a allow-uri-read</cite> flag to the script:
</p>
<pre lang="bash">
$ ruby adoc.rb mydoc.adoc -a allow-uri-read 
</pre>
<p>
And now I can generate asciidoc files to my heart&#8217;s content.</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/01/23/asciidoc-asciidoc-exploding-includes/">Asciidoc to Asciidoc: Exploding includes</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.markhneedham.com/blog/2018/01/23/asciidoc-asciidoc-exploding-includes/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
	<post-id xmlns="com-wordpress:feed-additions:1">7241</post-id>	</item>
		<item>
		<title>Strava: Calculating the similarity of two runs</title>
		<link>http://www.markhneedham.com/blog/2018/01/18/strava-calculating-similarity-two-runs/</link>
		<comments>http://www.markhneedham.com/blog/2018/01/18/strava-calculating-similarity-two-runs/#respond</comments>
		<pubDate>Thu, 18 Jan 2018 23:35:25 +0000</pubDate>
		<dc:creator><![CDATA[Mark Needham]]></dc:creator>
				<category><![CDATA[Software Development]]></category>
		<category><![CDATA[dtw]]></category>
		<category><![CDATA[Dynamic Time Warping]]></category>
		<category><![CDATA[Google encoded polyline algorithm format]]></category>
		<category><![CDATA[python]]></category>
		<category><![CDATA[running]]></category>
		<category><![CDATA[strava]]></category>
		<category><![CDATA[strava api]]></category>

		<guid isPermaLink="false">http://www.markhneedham.com/blog/?p=7236</guid>
		<description><![CDATA[<p>I go running several times a week and wanted to compare my runs against each other to see how similar they are. I record my runs with the Strava app and it has an API that returns lat/long coordinates for each run in the Google encoded polyline algorithm format. We can use the polyline library [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/01/18/strava-calculating-similarity-two-runs/">Strava: Calculating the similarity of two runs</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>
I go running several times a week and wanted to compare my runs against each other to see how similar they are.
</p>
<p>
I record my runs with the <a href="https://www.strava.com/">Strava</a> app and it has an <a href="https://strava.github.io/api">API</a> that returns lat/long coordinates for each run in the <a href="https://strava.github.io/api/#polylines">Google encoded polyline algorithm format</a>.
</p>
<p>
We can use the <a href="https://pypi.python.org/pypi/polyline/">polyline</a> library to decode these values into a list of lat/long tuples. For example:
</p>
<pre lang="python">
import polyline
polyline.decode('u{~vFvyys@fS]')
[(40.63179, -8.65708), (40.62855, -8.65693)]
</pre>
<p>
Once we&#8217;ve got the route defined as a set of coordinates we need to compare them. My Googling led me to an algorithm called <a href="https://en.wikipedia.org/wiki/Dynamic_time_warping">Dynamic Time Warping</a>:
</p>
<blockquote><p>
DTW is a method that calculates an optimal match between two given sequences (e.g. time series) with certain restrictions. </p>
<p>The sequences are &#8220;warped&#8221; non-linearly in the time dimension to determine a measure of their similarity independent of certain non-linear variations in the time dimension.
</p></blockquote>
<p>
The <a href="https://pypi.python.org/pypi/fastdtw">fastdtw</a> library implements an approximation of this algorithm and returns a value indicating the distance between sets of points.
</p>
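<p>
To get a feel for what fastdtw is approximating, here&#8217;s a tiny exact DTW on one dimensional sequences (a plain Python sketch, not the fastdtw implementation):
</p>

```python
def dtw(a, b):
    # dp[i][j] = minimum cost of warping a[:i] onto b[:j]
    inf = float("inf")
    dp = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    dp[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # each point may match one or more points of the other sequence
            dp[i][j] = cost + min(dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1])
    return dp[len(a)][len(b)]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # similar shapes, small distance
print(dtw([1, 2, 3], [5, 9, 9]))     # different shapes, much bigger
```
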
<p>
We can see how to apply fastdtw and polyline against Strava data in the following example:
</p>
<pre lang="python">
import os
import polyline
import requests
from fastdtw import fastdtw

token = os.environ["TOKEN"]
headers = {'Authorization': "Bearer {0}".format(token)}

def find_points(activity_id):
    r = requests.get("https://www.strava.com/api/v3/activities/{0}".format(activity_id), headers=headers)
    response = r.json()
    line = response["map"]["polyline"]
    return polyline.decode(line)
</pre>
<p>
Now let&#8217;s try it out on two runs, <a href="https://www.strava.com/activities/1361109741">1361109741</a> and <a href="https://www.strava.com/activities/1346460542">1346460542</a>:
</p>
<pre lang="python">
from scipy.spatial.distance import euclidean

activity1_id = 1361109741
activity2_id = 1346460542

distance, path = fastdtw(find_points(activity1_id), find_points(activity2_id), dist=euclidean)

>>> print(distance)
2.91985018100644
</pre>
<p>
These two runs are both near my house so the value is small. Let&#8217;s change the second route to be <a href="https://www.strava.com/activities/1246017379">from my trip to New York</a>:
</p>
<pre lang="python">
activity1_id = 1361109741
activity2_id = 1246017379

distance, path = fastdtw(find_points(activity1_id), find_points(activity2_id), dist=euclidean)

>>> print(distance)
29383.492965394034
</pre>
<p>
Much bigger!
</p>
<p>I&#8217;m not really interested in the actual value returned but I am interested in the relative values. I&#8217;m building a little application to generate routes that I should run and I want it to come up with routes that are different to recent ones that I&#8217;ve run. This score can now form part of the criteria.</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2018/01/18/strava-calculating-similarity-two-runs/">Strava: Calculating the similarity of two runs</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.markhneedham.com/blog/2018/01/18/strava-calculating-similarity-two-runs/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
	<post-id xmlns="com-wordpress:feed-additions:1">7236</post-id>	</item>
		<item>
		<title>Leaflet: Fit polyline in view</title>
		<link>http://www.markhneedham.com/blog/2017/12/31/leaflet-fit-polyline-view/</link>
		<comments>http://www.markhneedham.com/blog/2017/12/31/leaflet-fit-polyline-view/#respond</comments>
		<pubDate>Sun, 31 Dec 2017 17:35:03 +0000</pubDate>
		<dc:creator><![CDATA[Mark Needham]]></dc:creator>
				<category><![CDATA[Javascript]]></category>
		<category><![CDATA[leafletjs]]></category>

		<guid isPermaLink="false">http://www.markhneedham.com/blog/?p=7234</guid>
		<description><![CDATA[<p>I&#8217;ve been playing with the Leaflet.js library over the Christmas holidays to visualise running routes drawn onto the map using a Polyline and I wanted to zoom the map the right amount to see all the points. Pre requisites We have the following HTML to define the div that will contain the map. We also [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2017/12/31/leaflet-fit-polyline-view/">Leaflet: Fit polyline in view</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>
I&#8217;ve been playing with the <a href="http://leafletjs.com/reference-1.2.0.html">Leaflet.js</a> library over the Christmas holidays to visualise running routes drawn onto the map using a Polyline and I wanted to zoom the map the right amount to see all the points.
</p>
<h2>Prerequisites</h2>
<p>
We have the following HTML to define the <cite>div</cite> that will contain the map.
</p>
<pre lang="html">
<div id="container">
	<div id="map" style="width: 100%; height: 100%">
	</div>
</div>
</pre>
<p>
We also need to import the following Javascript and CSS files:
</p>
<pre lang="html">
<script src="http://cdn.leafletjs.com/leaflet-0.7/leaflet.js"></script>
<script type="text/javascript" src="https://rawgit.com/jieter/Leaflet.encoded/master/Polyline.encoded.js"></script>
<link rel="stylesheet" href="http://cdn.leafletjs.com/leaflet-0.7/leaflet.css"/>

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/leaflet.draw/0.4.2/leaflet.draw.css"/>
<script src="https://cdnjs.cloudflare.com/ajax/libs/leaflet.draw/0.4.2/leaflet.draw.js"></script>
</pre>
<h2>Polyline representing part of a route</h2>
<p>
The following code creates a polyline for a <a href="https://www.strava.com/segments/15311748">Strava segment</a> that I often run.
</p>
<pre lang="javascript">
var map = L.map('map');
L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {maxZoom: 18,}).addTo(map);

var rawPoints = [
  { "latitude": 51.357874010145395, "longitude": -0.198045110923591 },
  { "latitude": 51.3573858289394, "longitude": -0.19787754933584795 },
  { "latitude": 51.35632791810057, "longitude": -0.19750254941422557 },
  { "latitude": 51.35553240304241, "longitude": -0.197232163894512 },
  { "latitude": 51.35496267279901, "longitude": -0.1970247338143316 },
  { "latitude": 51.35388700570004, "longitude": -0.19666483094752069 },
  { "latitude": 51.3533898352570, "longitude": -0.1964976504847828 },
  { "latitude": 51.35358452733139, "longitude": -0.19512563906602554 },
  { "latitude": 51.354762877995036, "longitude": -0.1945622934585907 },
  { "latitude": 51.355610110109986, "longitude": -0.19468697186046677 },
  { "latitude": 51.35680377680643, "longitude": -0.19395063336295112 },
  { "latitude": 51.356861596801075, "longitude": -0.1936180154828497 },
  { "latitude": 51.358487396611125, "longitude": -0.19349660642888197 }
];

var coordinates = rawPoints.map(rawPoint => new L.LatLng(rawPoint["latitude"], rawPoint["longitude"]))

let polyline = L.polyline(
    coordinates,
    {
        color: 'blue',
        weight: 3,
        opacity: .7,
        lineJoin: 'round'
    }
);

polyline.addTo(map);
</pre>
<p>I wanted to centre the map around the polyline and initially wrote the following code to do this:</p>
<pre lang="javascript">
let lats = rawPoints.map(c => c.latitude).reduce((previous, current) => previous + current, 0.0);
let longs = rawPoints.map(c => c.longitude).reduce((previous, current) => previous + current, 0.0);

const position = [lats / rawPoints.length, longs / rawPoints.length];
map.setView(position, 17);
</pre>
<p>
This worked fine, but the zoom factor was wrong when I drew longer polylines so I needed a better solution.</p>
<p>I should have <a href="http://leafletjs.com/reference-1.2.0.html#polyline">RTFM</a> because there&#8217;s a much simpler way to do this. I actually found the explanation in <a href="https://github.com/Leaflet/Leaflet/issues/360">a GitHub issue from 2011</a>! We can replace the previous snippet with this single line of code:
</p>
<pre lang="javascript">
map.fitBounds(polyline.getBounds());
</pre>
<p>
And this is how it looks on the screen:
</p>
<div>
<img src="http://www.markhneedham.com/blog/wp-content/uploads/2017/12/2017-12-31_17-30-25.png" alt="2017 12 31 17 30 25" title="2017-12-31_17-30-25.png" border="0" width="213" height="164" />
</div>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2017/12/31/leaflet-fit-polyline-view/">Leaflet: Fit polyline in view</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.markhneedham.com/blog/2017/12/31/leaflet-fit-polyline-view/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
	<post-id xmlns="com-wordpress:feed-additions:1">7234</post-id>	</item>
		<item>
		<title>Ethereum Hello World Example using solc and web3</title>
		<link>http://www.markhneedham.com/blog/2017/12/28/ethereum-hello-world-example-using-solc-and-web3/</link>
		<comments>http://www.markhneedham.com/blog/2017/12/28/ethereum-hello-world-example-using-solc-and-web3/#respond</comments>
		<pubDate>Thu, 28 Dec 2017 11:03:56 +0000</pubDate>
		<dc:creator><![CDATA[Mark Needham]]></dc:creator>
				<category><![CDATA[Ethereum]]></category>
		<category><![CDATA[blockchain]]></category>
		<category><![CDATA[ethereum]]></category>
		<category><![CDATA[smart-contracts]]></category>

		<guid isPermaLink="false">http://www.markhneedham.com/blog/?p=7231</guid>
		<description><![CDATA[<p>I&#8217;ve been trying to find an Ethereum Hello World example and came across Thomas Conté&#8217;s excellent post that shows how to compile and deploy an Ethereum smart contract with solc and web3. In the latest version of web3 the API has changed to be based on promises so I decided to translate Thomas&#8217; example. Let&#8217;s [&#8230;]</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2017/12/28/ethereum-hello-world-example-using-solc-and-web3/">Ethereum Hello World Example using solc and web3</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>
I&#8217;ve been trying to find an Ethereum Hello World example and came across Thomas Conté&#8217;s excellent post that shows how to <a href="http://hypernephelist.com/2016/12/13/compile-deploy-ethereum-smart-contract-web3-solc.html">compile and deploy an Ethereum smart contract with solc and web3</a>.
</p>
<p>
In the latest version of web3 the API has changed to be based on promises so I decided to translate Thomas&#8217; example.
</p>
<p>Let&#8217;s get started.</p>
<h2>Install npm libraries</h2>
<p>
We need to install these libraries before we start:
</p>
<pre lang="bash">
npm install web3
npm install abi-decoder
npm install ethereumjs-testrpc
</pre>
<p>What do these libraries do?</p>
<ul>
<li>
<cite>web3</cite> is a client library for interacting with an Ethereum blockchain
</li>
<li>
<cite>abi-decoder</cite> is used to decode the input data sent in a transaction so that we can work out what was in it.
</li>
<li>
<cite>ethereumjs-testrpc</cite> lets us spin up a local test version of Ethereum
</li>
</ul>
<h2>Smart contract</h2>
<p>
We&#8217;ll still use the same smart contract as Thomas did. <cite>Token.sol</cite> is a smart contract written in the <a href="https://solidity.readthedocs.io/en/develop/">Solidity</a> language and describes money being transferred between addresses:
</p>
<p><cite>contracts/Token.sol</cite></p>
<pre lang="text">
pragma solidity ^0.4.0;

contract Token {
    mapping (address => uint) public balances;
  
    function Token() {
        balances[msg.sender] = 1000000;
    }

    function transfer(address _to, uint _amount) {
        if (balances[msg.sender] < _amount) {
            throw;
        }

        balances[msg.sender] -= _amount;
        balances[_to] += _amount;
    }
}
</pre>
<p>
When the contract is created the creating account is credited with 1,000,000 tokens. Whenever somebody then transfers some money, the appropriate amount is moved across, assuming there's enough money in the sender's account.
</p>
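<p>
The contract's bookkeeping is simple enough to sketch as a toy Python simulation of the <cite>balances</cite> mapping (just to show the logic, this isn't Ethereum code):
</p>

```python
class Token:
    def __init__(self, creator):
        # the constructor credits the creating account with 1,000,000 tokens
        self.balances = {creator: 1000000}

    def transfer(self, sender, to, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")  # the Solidity version throws
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

token = Token("account1")
token.transfer("account1", "account2", 10)
print(token.balances)  # {'account1': 999990, 'account2': 10}
```
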
<h2>Start local Ethereum node</h2>
<p>
Let's start a local Ethereum node. We'll reduce the gas price - the amount you 'pay' to execute a transaction - so we don't run out.
</p>
<pre lang="bash">
$ ./node_modules/.bin/testrpc --gasPrice 20000
EthereumJS TestRPC v6.0.3 (ganache-core: 2.0.2)

Listening on localhost:8545
</pre>
<h2>Prerequisites</h2>
<p>
We need to load a few Node.js modules:
</p>
<pre lang="javascript">
const fs = require("fs"),
      abiDecoder = require('abi-decoder'),
      Web3 = require('web3'),
      solc = require('solc');
</pre>
<h2>Compile smart contract</h2>
<p>
Next we'll compile our smart contract:
</p>
<pre lang="javascript">
const input = fs.readFileSync('contracts/Token.sol');
const output = solc.compile(input.toString(), 1);
const bytecode = output.contracts[':Token'].bytecode;
const abi = JSON.parse(output.contracts[':Token'].interface);
</pre>
<h2>Connect to Ethereum and create contract object</h2>
<p>Now that we've got the <a href="https://github.com/ethereum/wiki/wiki/Ethereum-Contract-ABI">ABI</a> (Application Binary Interface) we'll connect to our local Ethereum node and create a contract object based on the ABI:
</p>
<pre lang="javascript">
let provider = new Web3.providers.HttpProvider("http://localhost:8545");
const web3 = new Web3(provider);
let Voting = new web3.eth.Contract(abi);
</pre>
<h2>Add ABI to decoder</h2>
<p>
Before we interact with the blockchain we'll first add the ABI to our ABI decoder to use later:
</p>
<pre lang="javascript">
abiDecoder.addABI(abi);
</pre>
<h2>Find (dummy) Ethereum accounts</h2>
<p>
Now we're ready to create some transactions! We'll need some Ethereum accounts to play with and if we call <a href="https://web3js.readthedocs.io/en/1.0/web3-eth.html#getaccounts">web3.eth.getAccounts</a> we can get a collection of accounts that the node controls. Since our node is a test one these are all dummy accounts.
</p>
<pre lang="javascript">
web3.eth.getAccounts().then(accounts => {
  accounts.forEach(account => {
    console.log(account)
  })
});
</pre>
<pre lang="text">
0xefeaE7B180c7Af4Dfd23207422071599c7dfd2f7
0x3a54BaAFDe6747531a28491FDD2F36Cb61c83663
0x367e1ac67b9a85E438C7fab7648964E5ed12061e
0xB34ECD20Be6eC99e8e9fAF641A343BAc826FFFf1
0xE65587a2951873efE3325793D5729Ef91b15d5b5
0xdA232aEe954a31179E2F5b40E6efbEa27bB89c87
0x7119fEbab069d440747589b0f1fCDDBAdBDd105d
0xCacB2b61dE0Ca12Fd6FECe230d2f956c8Cdfed34
0x4F33BF93612D1B89C8C8872D4Af30Fa2A9CbfaAf
0xA1Ebc0D19dB41A96B5278720F47C2B6Ab2506ccF
</pre>
<h2>Transfer money between accounts</h2>
<p>Now that we have some accounts let's transfer some money between them.</p>
<pre lang="javascript">
var allAccounts;
web3.eth.getAccounts().then(accounts => {
  allAccounts = accounts;
  Voting.deploy({data: bytecode}).send({
    from: accounts[0],
    gas: 1500000,
    gasPrice: '30000000000000'
  }).on('receipt', receipt => {
    Voting.options.address = receipt.contractAddress;
    Voting.methods.transfer(accounts[1], 10).send({from: accounts[0]}).then(transaction => {
      console.log("Transfer lodged. Transaction ID: " + transaction.transactionHash);
      let blockHash = transaction.blockHash
      return web3.eth.getBlock(blockHash, true);
    }).then(block => {
      block.transactions.forEach(transaction => {
        console.log(abiDecoder.decodeMethod(transaction.input));
      });

      allAccounts.forEach(account => {
          Voting.methods.balances(account).call({from: allAccounts[0]}).then(amount => {
            console.log(account + ": " + amount);
          });
      });
    });
  });
});
</pre>
<p>Let's run it:</p>
<pre lang="text">
Transfer lodged. Transaction ID: 0x699cbe40121d6c2da7b36a107cd5f28b35a71aff2a0d584f8e734b10f4c49de4

{ name: 'transfer',
  params: 
   [ { name: '_to',
       value: '0xeb25dbd0931386eeab267981626ae3908d598404',
       type: 'address' },
     { name: '_amount', value: '10', type: 'uint256' } ] }

0x084181d6fDe8bA802Ee85396aB1d25Ddf1d7D061: 999990
0xEb25dbD0931386eEaB267981626AE3908D598404: 10
0x7deB2487E6Ac40f85fB8f5A3bC6896391bf2570F: 0
0xA15ad4371B62afECE5a7A70457F82A30530630a3: 0
0x64644f3B6B95e81A385c8114DF81663C39084C6a: 0
0xBB68FF2935080c807D5A534b1fc481Aa3fafF1C0: 0
0x38d4A3d635B451Cb006d63ce542950C067D47F58: 0
0x7878bA9138361A08522418BD1c8376Af7220a506: 0
0xf400c0e749Fe02E7073E08d713E0A207dc91FBeb: 0
0x7070d1712a25eb7FCf78A549F17705AA66B0aD47: 0
</pre>
<p>
This code:</p>
<ul>
<li>
Deploys our smart contract to the blockchain
</li>
<li>
Transfers 10 tokens from account 1 to account 2
</li>
<li>
Decodes that transaction and shows the output
</li>
<li>
Shows the balances of all the dummy accounts
</li>
</ul>
<p>The <a href="https://github.com/mneedham/ethereum-nursery/blob/master/eth_solc.js">full example is available</a> in my <a href="https://github.com/mneedham/ethereum-nursery">ethereum-nursery</a> GitHub repository. Thomas also has <a href="http://hypernephelist.com/2017/01/19/deploy-ethereum-smart-contract-using-client-signature.html">a follow up post</a> that shows how to deploy a contract on a remote node where client side signatures become necessary.</p>
<p>The post <a rel="nofollow" href="http://www.markhneedham.com/blog/2017/12/28/ethereum-hello-world-example-using-solc-and-web3/">Ethereum Hello World Example using solc and web3</a> appeared first on <a rel="nofollow" href="http://www.markhneedham.com/blog">Mark Needham</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.markhneedham.com/blog/2017/12/28/ethereum-hello-world-example-using-solc-and-web3/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
	<post-id xmlns="com-wordpress:feed-additions:1">7231</post-id>	</item>
	</channel>
</rss>
