With the help of this great book: Hello World!: Computer Programming for Kids and Other Beginners
he is starting to grasp important concepts such as variables, strings vs. numbers, and soon enough we will be going into control flow and loops.
Not very different from how I started. Time will tell whether it will stick with him, and whether his siblings are going to join the party as well.
Even if none of them ends up with a career in software engineering, programming skills are superbly useful in almost any line of work these days, and probably even more so by the time they enter the workforce, so I’ll be doing my best to help them be ready, and have tons of fun while doing so.
When sending DateTimes as strings across the wire, it is quite useful to use ISO 8601 date formatting. For one, it holds all the required information (including the timezone offset when specified); it is easy to infer the Kind of the DateTime (UTC, Local or Unspecified); it is widely and commonly used across most (if not all) platforms; and if you omit milliseconds, it is lexicographically ordered, which makes it useful for indexed storage as well (for example on the filesystem, or as keys in string-based key-value stores such as Azure Table Storage).
During the many times I have had to deal with serialization implementations, while working on one of the many web frameworks I’ve been involved with, or with serialization libraries, I keep needing to remember what I did last time, so this post is to serve as a future reminder to self on how I want it done.
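For instance, the serializing side boils down to something like this (a minimal sketch of the idea; the variable names are mine):

```csharp
using System;

// Format a DateTime for the wire using the round-trip ("o") specifier.
// For a UTC-kind value it emits the full-precision ISO 8601 form ending in Z.
var moment = new DateTime(2013, 9, 3, 10, 0, 0, DateTimeKind.Utc);
string wire = moment.ToString("o");
Console.WriteLine(wire); // 2013-09-03T10:00:00.0000000Z
```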
Did you notice the “o” format specifier? It is much better than typing “yyyy’-‘MM’-‘dd’T’HH’:’mm’:’ss.fffffffK”, which I had been doing until recently.
For a lexicographically ordered version, we would only go as far as the seconds, and make sure to force the input DateTime's kind to UTC (otherwise ordering is difficult to maintain…):
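Something along these lines (a sketch; the format string and names are mine):

```csharp
using System;
using System.Globalization;

var moment = new DateTime(2013, 9, 3, 10, 0, 0, DateTimeKind.Utc);
// Force UTC and drop sub-second precision; the result is fixed-width,
// so string ordering now matches chronological ordering.
string key = moment.ToUniversalTime()
                   .ToString("yyyy-MM-dd'T'HH:mm:ssK", CultureInfo.InvariantCulture);
Console.WriteLine(key); // 2013-09-03T10:00:00Z
```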
The ‘K’ specifier will render the following:
If the input datetime is of UTC kind, it will render the letter Z
If the input datetime is of Unspecified kind, it will render … nothing
If the input datetime is of Local kind, it will render + (or –) the offset
The interesting bit here is that there is a difference between 2013-09-03T10:00:00-00:00 and 2013-09-03T10:00:00Z. They refer to the same point in time; however, the former refers to the Local time where the offset is 0 (e.g. London, UK in winter time – a lovely picture), while the latter refers to the UTC time. This knowledge allows us to infer the actual DateTime kind when parsing the result. How do we do that, you may rightfully ask?
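A sketch of the parsing side (my reconstruction, not necessarily the exact original snippet):

```csharp
using System;
using System.Globalization;

// RoundtripKind preserves the kind encoded in the string:
// an explicit offset yields Local, a trailing Z yields Utc,
// and no designator at all yields Unspecified.
var zeroOffset = DateTime.Parse("2013-09-03T10:00:00-00:00",
    CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind);
var zulu = DateTime.Parse("2013-09-03T10:00:00Z",
    CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind);

Console.WriteLine(zeroOffset.Kind); // Local
Console.WriteLine(zulu.Kind);       // Utc
```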
That’s it. The trick is in the DateTimeStyles.RoundtripKind bit. I keep forgetting that, and this (and the “o” specifier) is the reason for this post.
When deserializing ordered DateTimes, the former deserialization code would end up with a DateTime of Unspecified kind, so it would be better to do this:
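For instance (a sketch; the point is the AssumeUniversal and AdjustToUniversal flags, the rest is mine):

```csharp
using System;
using System.Globalization;

// Treat the incoming value as UTC, and keep it in UTC after parsing,
// so the resulting Kind is Utc rather than Unspecified or Local.
var parsed = DateTime.Parse("2013-09-03T10:00:00Z",
    CultureInfo.InvariantCulture,
    DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal);

Console.WriteLine(parsed.Kind); // Utc
```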
(from http://kenegozi.com/blog/2010/10/07/baby-smash-on-big-screen)
And now, Alma (8 months) is doing that:
smaller screen, shorter hair, the rest is quite the same
In order to delete a record, you need to have a cell from that record selected, then click the Delete icon on the Add-Ins ribbon menu. The only requirement is to actually have a value in the first cell of that line.
In order to insert a record, you need to set the fields and keep the id empty, then click the Add (plus) icon on the Add-Ins ribbon menu while a cell in that new record’s row is selected.
The management portal of Azure does let you browse your data, but not edit it.
A few days back, Amit showed on his blog a way to create a simple data manager as a Windows 8 Application, using the official SDK.
I however like the UI of Excel for data editing, so I wanted to create a simple editor that taps to Excel mechanisms, and uses the unofficial SDK to communicate with the mobile service.
The results can be seen in the following recording (you’d want to watch it in HD):
How?
First, I created an Excel AddIn project in VS2012. Then I grabbed the latest SDK file from github, and added it to the project. Lastly, I changed the AddIn code to look like that gist (you’d need to set your app url and key), and ran the project.
Current limitations:
There are, however, two drawbacks that people commonly point to with regard to using gists that way:
My answer to the first one is simple: I don’t really care. It is not that I do not care about SEO, just that I do not need to have my post indexed and tagged under a bunch of irrelevant reserved words and common code. If the snippet is about using an interesting component ABC, I will mention said ABC in the post content outside of the snippet. Problem solved.
The latter is more interesting. I used to manually add a link to the gist page whenever embedding one, but it is not a very fun thing to do.
So, in order to overcome this, I wrote a small code snippet (yay) that, upon saving a post (or updating it), will look for gist embeds, grab the gist source from GitHub, stick it into the post as a “ContentForFeed”, and serve it with a link to the gist page, just for the fun of it.
And here is the code for it (it’s hacky C#, but easily translatable to other languages, and/or to a cleaner form):
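The gist of it (pun intended) is roughly this. The embed markup and raw-URL shapes below are my assumptions, and the fetcher is injected so you can swap in a real WebClient call:

```csharp
using System;
using System.Net;
using System.Text.RegularExpressions;

// Replace each gist <script> embed with the gist source plus a link back
// to the gist page. The embed and raw-URL shapes here are assumptions;
// adjust them to your actual markup.
string BuildContentForFeed(string postHtml, Func<string, string> fetchRaw)
{
    return Regex.Replace(
        postHtml,
        @"<script src=""https://gist\.github\.com/(?<id>\w+)\.js""></script>",
        m =>
        {
            var id = m.Groups["id"].Value;
            var source = fetchRaw("https://gist.github.com/raw/" + id);
            return "<pre>" + WebUtility.HtmlEncode(source) + "</pre>"
                 + "<a href=\"https://gist.github.com/" + id + "\">view the gist</a>";
        });
}

// In the real thing the fetcher would be: url => new WebClient().DownloadString(url)
var feedHtml = BuildContentForFeed(
    "<p>hi</p><script src=\"https://gist.github.com/123.js\"></script>",
    url => "Console.WriteLine(42);");
Console.WriteLine(feedHtml);
```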
Have fun reading snippets
Now you are probably saying “I wish it would work with other client platforms, not just Windows 8”.
Guess what? The service is actually talking to the SDK via HTTP, and the Windows 8 SDK that is published alongside it is a (very rich, awesomely done) wrapper around that HTTP API. Given that, I jumped ahead and implemented a (very poor, awfully done) SDK for Windows Phone.

Disclaimer: What you see here in this post and other related ones is 99.999% guaranteed to fail for you. It is a hack job that I put together in a few late-night hours, and it is not endorsed by the Mobile Services team. It is likely that if and when we do come up with an official WP SDK, it will look different. Very different. Even the HTTP API that I’m using here is likely to change by the time the service gets out of Preview mode.

You can peek at some of the usages of the API in the following gist:
In follow-up posts I will cover the API more, and I will also be adding XML comments to the SDK to make it easier to use. How to get it? Head over to https://github.com/kenegozi/azure-mobile-csharp-sdk. You can either clone the repo, or just navigate to /src/MobileServiceClient.cs, click the ‘Raw’ button, and save it in your project. You’d need to have the latest Newtonsoft Json.NET referenced as well (if you don’t have it already). A NuGet-based delivery is in the works.
Instead, I downloaded and installed the “Silverlight 5 SDK” (from here, scroll down a bit), which apparently is not dependent on VS, hence it installed correctly and the problem is gone.
Instead of going over what a proxy is, I’ll first describe a usage scenario or two to make the explanation more concrete.
Consider any “service class” you might have written; let’s assume it has a well-defined public API – probably using an interface. Now let’s say that you want to start logging the amount of time each of the methods of that public API takes. A common solution would look like this:
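Something in this spirit (a hypothetical service; the names and the return value are made up):

```csharp
using System;
using System.Diagnostics;

// A "service method" with the timing concern hand-woven into its body.
int CountOrders()
{
    var watch = Stopwatch.StartNew();
    try
    {
        return 42; // the actual business logic
    }
    finally
    {
        watch.Stop();
        Console.WriteLine("CountOrders took " + watch.ElapsedMilliseconds + "ms");
        // ...and the same boilerplate repeated verbatim in every other public method
    }
}

Console.WriteLine(CountOrders());
```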
This violates a few engineering principles (repeated code, magic strings, etc.), makes debugging annoying, clutters the code, and is overall not fun.
With the ability to replace a method at runtime, a developer in Ruby/JavaScript/et al. can easily patch these methods and add the cross-cutting concern at run-time.
The concept of AOP is not unfamiliar to C# developers. While some solutions use compile-time code weaving (a la PostSharp) and other techniques, the more common one (in use by most IoC containers, as well as NHibernate and other frameworks) is a DynamicProxy. Meaning that at runtime, user code will ask a factory (or IoC) for an object of type X, and will get an object of type Y, where Y is a subtype of X that was dynamically generated at runtime to override X’s public methods and apply the aspect there. Not unlike any other Wrapper / Decorator class, except for the fact that no one needs to manually write code for the wrappers; instead you write the aspect once, and apply it to many types/methods.
NHibernate, to allow lazy loading of properties, uses a dynamic proxy when creating instances of objects that were read from the DB, decorating public virtual mapped getters with a “load the content when first accessed” concern. This is totally transparent to the user. The fact that NH uses (at least by default) runtime dynamic proxies, and that (at least by default) it works with class-based POCOs for entities (and not interfaces), is why the docs tell you to use virtual properties if you want lazy loading.
And wouldn’t it be nice when writing GUI apps to have PropertyChanged events be wired automatically?
Here is where it gets even more interesting, IMO.
The proxying technique can actually be applied to interfaces, not only virtual classes. Meaning that you can generate code at runtime to implement certain contracts, without having an actual implementation of those interfaces in your user code at all!
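As a concrete sketch of an interface proxy, here is the timing aspect from earlier applied via the BCL’s DispatchProxy. (Castle’s DynamicProxy is the more common choice and also handles class proxies; all the type names here are mine.)

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

var proxy = DispatchProxy.Create<IOrderService, TimingProxy>();
((TimingProxy)(object)proxy).Target = new OrderService();
Console.WriteLine(proxy.CountOrders()); // the call is timed transparently

public interface IOrderService { int CountOrders(); }

public class OrderService : IOrderService
{
    public int CountOrders() => 42;
}

// The aspect is written once, and applied to any interface at runtime.
public class TimingProxy : DispatchProxy
{
    public object Target { get; set; }

    protected override object Invoke(MethodInfo method, object[] args)
    {
        var watch = Stopwatch.StartNew();
        try { return method.Invoke(Target, args); }
        finally
        {
            watch.Stop();
            Console.WriteLine(method.Name + " took " + watch.ElapsedMilliseconds + "ms");
        }
    }
}
```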
A fine example of that approach is Castle’s DictionaryAdapterFactory (see http://docs.castleproject.org/Default.aspx?Page=DictionaryAdapter-Introduction&NS=Tools&AspxAutoDetectCookieSupport=1). In essence, a dynamic proxy is created at runtime to implement a given interface’s properties, allowing typed read/write access to untyped <string,object> datastores (Session, Cache, ViewBag, you name it).
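For instance (a sketch assuming the Castle.Core package is referenced; the interface is my own invention):

```csharp
using System;
using System.Collections;
using Castle.Components.DictionaryAdapter; // from the Castle.Core package

IDictionary raw = new Hashtable();
ISessionBag bag = new DictionaryAdapterFactory().GetAdapter<ISessionBag>(raw);

bag.UserName = "ken";
bag.VisitCount = 3;

// Reads and writes go straight through to the untyped store.
Console.WriteLine(raw["UserName"]); // ken

public interface ISessionBag
{
    string UserName { get; set; }
    int VisitCount { get; set; }
}
```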
Another example, where I used that technique in the past: in an RPC client/server scenario, you need to keep a few things in sync – the server’s endpoints (HTTP in my scenario), the method signatures on both the server and client, and more. The system was using an interface (with a couple of attributes for metadata, e.g. URL parts) to declare the server’s API. The server holds implementations of the interface, and at runtime it reflects over the interface to build the endpoints (think MVC routes), while the client generates dynamic proxies from the interfaces that call out to the server in a transparent way. This way we avoided the need to constantly regenerate client proxies (lots of repetitive code and clutter in the codebase, a tax on source control and process, and difficult to manipulate and extend), and stayed refactoring-friendly (because it is all code, and magic strings such as URL prefixes are defined in exactly one place).
Sorry, running out of time here. I will post an example implementation for a dynamic proxy in c# in a follow-up post.
At some point I moved the app to AppHarbor (which runs in AWS) and I moved the data to MongoLab (which is also on AWS). Both are really great services.
Before it was running on MongoDB, it used to run on an RDBMS (via the NHibernate OR/M), and I remember the exercise of translating the data access calls from RDBMS to a document store as fun. Sure, a blog is a very simplistic specimen, but even at that level you get to think about modeling approaches (would comments go in a separate collection, or as subdocuments? how to deal with comment counts? what about tag clouds? what about pending comments that are suspected to be spam?)
I am now going to repeat that exercise with Azure Storage.
The interesting data API requirements are:
Given the rich featureset of MongoDB, I was able to use secondary indexes, sub-documents, atomic document updates, and (for 4, 6 and 8) simple mapReduce calls. The only de-normalization was the CommentsCount field on post, which is atomically updated every time a comment is added to or removed from the post, so the system stayed fully consistent all the time. The queries that required mapReduce (which could get pricey on larger data-sets, and annoying even on small ones) were actually amenable to aggressive caching, so no big pain there.
I will be exploring (in upcoming posts; it’s 2am now and the baby just woke up) what it takes to get the same spec implemented using the Azure Storage options – mostly Table Storage and Blob Storage.
So a while back I set up a system for a customer. They are not a tech company, but rather a more traditional business constructed around “buy stuff for cheap and sell for more”.
The system (some aspects of which, along with its history and evolution, are material for a few other blog posts) is automating a lot of the pre-processing for incoming buy and sell requests, filtering a really noisy stream of incoming data into relevant pieces of information that are handed to the salespeople quickly, making the business far more productive and competitive than it would be without it.
Given the importance, the system needs to be pretty robust. Given the amount of moving parts, it is not a very trivial task.
The backend storage for the system’s internal state (it also coordinates with several other data sources) was MongoDB.
The setup: a single mongod process, running version 1.8.something (the latest at the time) with journaling on, and all write ops from the client requiring full ack and flush-to-disk (fsync) to complete. It is also running on a machine that already runs many other things, and is not a very beefy machine to begin with.
Oh yeah, and nobody is watching over it (not a tech company – did I mention that?).
Single instance, you say? But sir, this is completely and utterly stupid!
Sharing the machine, you say? But it would eat up all the memory and kill everything!
No DB admin? No IT person who knows anything about it? It’s doomed!
In over a year, the system suffered only one breakdown, which is attributable solely to my stupidity – I installed a 32-bit version, and once the system needed to allocate a >2GB file it broke down.
The fix was very simple and super fast – I downloaded the 64-bit package, replaced the binaries, and restarted the service. No data loss; the system picked up jobs from the queue and quickly restored full capacity.
The system has been running for well over a year now, completely unattended, and the only meltdown was avoidable, yet was solved quickly and easily. MongoDB proved to be a robust piece of the puzzle. It is also showing a rather small memory footprint (most queries and updates are on the newest data, and insertions are usually to the end of collections, so most of the files can stay paged out to disk).
So yeah it is not a “web-scale” system in terms of request/sec or data size, but it proved to be a fairly good solution for an internal system that is in charge of tons of money.
Given the design I did for the system (another time, another post), I was not very afraid of possible problems with the data store, knowing that given a problem, once I solve it the system can quickly get back to work. Then I needed a solution that was cheap (low resources, run on existing hardware and OS), flexible to develop with, and with super easy install and upgrade story (xcopy deployment ftw). MongoDB was a perfect fit.
I’ve seen in my consulting years quite a few systems being very fragile, although they were relying on “proven stable” systems such as top-of-the-line RDBMS. Solid architecture and good design are far more crucial to system’s stability than specific tech choices. The question you need to ask yourself when you need to build a complex system (be it on the amount-of-moving parts front, dataset volume, system stress, data sensitivity or a mix of the above), is not “Is tech X stable enough or good enough”, but rather “Do I (or my people) know enough about building complex systems to build a stable one”. If you lack the experience, bring a person in who can help.
It annoyed me, and annoyed a few of my readers – some contacted me personally, and eventually this happened:
I first suspected that the Updated or Created timestamp fields might be wrong, but looking at both the feed generated by the blog, and the feed as it is being served by feedburner showed me that these fields did not magically change.
I did however find the problem.
My feed is in ATOM 1.0 format, and each entry has an <id> field.
The id I am putting there is the permalink to the post, and here comes the interesting part – I was taking the domain part of the permalink from the current request’s url. I was doing that because I was, how to put it, short sighted.
Anyway, as soon as the blog engine moved from my own fully-controlled VM hosted somewhere to more dynamic environments (AppHarbor at first, now an Azure WebRole), behind request routers, load balancers and such, the request that actually got to the blog engine had its domain name changed, and apparently not in a 100% consistent way. The custom CNAME in use changed every now and then (every few or more weeks), and then Google Reader would pick up the changed <id>, and even though the title, timestamps and content of the posts remained, the changed <id> made it believe it was a new post.
I now hardcoded the domain part, and all is (hopefully) well.
If not – you can always bash me on facebook :)
Have you noticed the “reserved” option? You could actually scale up to a dedicated VM (or a few), using the exact same simple deployment model.
And of course it’s not only for text files. You could run PHP, Node.js, as well as the more expected ASP.NET stack, on top of this.
The Web Sites feature is still in preview mode. To start using preview features like Virtual Network and Web Sites, request access on the ‘Preview Features’ page under the ‘account’ tab, after you log into your Windows Azure account. Don’t have an account? Sign up for a free trial here.
I’d like to point out a few of the most awesome ones. If you ever get a chance to work with them or for them – you won’t be wrong to take it.

Asana (http://www.asana.com/)
Suffice to say that interviewing with them was the single most difficult interview I have ever gone through. And I have been through some hairy interviews in my time. Just browse their team page, full of successful startup veterans, to understand their capacity for execution, and their deep understanding of how a web company is to be built on the business, tech and team-spirit fronts.

Bizzabo (http://www.bizzabo.com/)
I wish they were around back when I ran IDCC ‘09. The team is super focused, and their product is great. Take a couple of minutes off this page, and go read http://blog.bizzabo.com/5-useful-tips-for-maximizing-your-exhibition.

Commerce Sciences (http://www.commercesciences.com/)
If having Ron Gross there was not enough, they recently added Oren Ellenbogen to their impressive cast. I had the immense pleasure of working with these guys for quite some time. You’ll be able to learn a ton just by being around them. If you’re not following their Twitter feeds and blogs, go do that right now. And if all that is still not enough, the founders have a long, successful history in e-commerce and global-scale web services. E-commerce analysis suddenly sounds super interesting!

Gogobot (http://www.gogobot.com/)
With an incredible ability to deliver top-quality features in virtually no time, focus on customers, tons of talent and a super fun team spirit, this gang is re-inventing social travel planning. If you’re travelling somewhere without using the service you are missing out. If you are looking for a great team to work with in the Bay Area – give them a call.

Windward (http://www.windward.eu/)
This was a refreshing change from all the social-web-mobile-2.0 related companies. These guys are back to basics – solving actual problems for actual customers with actual money. Forget the long tail – we’re talking big-time clients. They are also dealing with some seriously complex data-crunching and non-trivial tech challenges. The management crew is extremely professional, experienced and friendly. I spent a truly remarkable month with them, and I’m sure anyone who works with them would feel the same.

Yotpo (https://b2b.yotpo.com/)
I think that Tomer and Omri have one of the best age-to-maturity ratios in the business. They also appear to be able to crack the social e-commerce formula into a compelling business model.

YouSites (http://yousites.net)
A really unique atmosphere. Working from an old villa in the relaxed Rehovot suburb, with home-cooked food and pets running around. Their sunlit garden is one of the best meeting rooms I’ve been to. With a passionate and experienced team, they’ve got a nice thing going there. Keep an eye on them.
I might have forgotten a few others (sorry) – it has been a crazy year after all
Some of these places are hiring. If you are awesome (you probably are if you’re reading my blog) and want an introduction – ping me.
Since then I left my job, met, consulted and worked with a few awesome startups, and finally joined Microsoft and moved with my family to Redmond, WA.
And had a new baby.
So tons of things were going on; let’s see if I can capture some thoughts on them:
Life here on the “east side” is much more relaxed. The amazing scenery, the very low honks-per-minute-on-road ratio, switching from a tiny 60-year-old apartment to a 20-year-old house, cool drizzly weather vs. the hot and humid Middle East. We do miss our families very much, but we also have much richer social lives here, with many friends, and plenty of play-dates and outdoor activities for the kids.
I’ve been working with and for startups for many years now. The move to a ~100,000-strong company is a huge change. Half a year in, and I am still struggling to adapt to the big-company way of thinking. There is also a big sense of responsibility in knowing that my work will soon be affecting a serious number of customers globally, a thing that in many cases in the startup world is not entirely true.
Startups also oftentimes hover between financially promising and money-down-the-drain. Microsoft has been in business for many decades, and still manages to net tons of money every year, and every year more so than the last one.
I also need to re-prove myself. When I was employed full-time in the past I held top positions such as Architect and Dev Manager, and was offered a few VP R&D and CTO jobs. As a busy consultant, I was actually paid to come in and voice my opinions out loud. In corporate land I started much lower, and now need to work very hard to get my voice heard, especially when I am surrounded by a really talented and experienced bunch of people. I see it as a challenge and as an opportunity to grow and learn. Being a lion’s tail beats being a wolf’s head almost any day of the year. And it is full of lions around here.
Given W[n] <=> work required for n kids, and F[n] <=> fun gained from n kids, it is sad that:

F[n+1] = F[n] * 2, while W[n+1] = W[n] ^ 2
Totally worth it though.
Settling down

It has been a heck of a year, with so many things to do that it kept me from engaging in the OSS and dev community activities as I did in the past. I only gave two short tech presentations (on git and on NoSQL data stores), did very few OSS contributions, and wrote no blog posts for seven months! Now that the whirlwind has slowed down, I find myself getting back to these things. I already have tons of things to write about, and a few session proposals to send out to conferences.
As far as this blog goes - the year of changes has just ended, and the year of new and exciting (at least for me) content begins. Stay tuned.
SELECT SERVERPROPERTY ('edition')
I expected to find Developer, but found Express instead.
When creating the database schema during integration tests run, he got “Cannot create table FOO error 105” from MySQL.
There used to be a table named FOO with a VARCHAR primary key. The schema then changed so that the primary key of FOO became BIGINT. There is also a second table in the system (call it BAR) which has a foreign-key into FOO’s primary key. A classic master/details scenario.
However, the table BAR was obsoleted from the schema.
The integration tests runner drops all tables and recreates them before running the test suite. It infers the schema from the persisted classes, using NHibernate’s mapper and the schema creation feature of NHibernate.

Sleeves up

We cranked open the mysql console and started to look around:
Creating FOO with Id BIGINT – fails with error 105.
Creating FOO with Id VARCHAR – success!!
Creating FOO with Id BIGINT – fails with error 105 – again.

Why can’t MySQL store non-indexed columns in an index?
with two possible usages: a post page, and a homepage.
Let’s define the view model:
PostData:
  string Title
  string Body

PostView:
  PostData Post

HomepageView:
  PostData[] Posts

LayoutView:
  Tuple<string, int>[] Archive
  Tuple<string, int>[] TagCloud
  string[] Similar
The views:
_Layout.cshtml – obvious
Post.cshtml – given a PostData instance will render Title and Body
PostPage.cshtml – given a PostData, will call Post.cshtml and then render “add comment” form
Homepage.cshtml – given PostData array, will iterate and call Post.cshtml for each post
How data moves around:
Controller is passing PostView (or HomepageView) along with LayoutView to the views
Post.cshtml should only see its parameters, not the layout’s (which are passed but are not interesting within the post template).
The same goes for the other views
All views should be able to “see” a shared parameter named “IsCurrentUserAdmin”
Given that I want typed access to the view parameters in the view (for the sake of intellisense and refactorings), how would I model and pass the data around?
I’ve written up two options in pseudo-code grade: the first is to use inheritance in the view model to achieve type-ness, at the expense of flexibility (composition is difficult with a class hierarchy, and you need to be aware of, and grab, the viewModel instance in various places). The second is flexible (use the ViewData dictionary), but getting type-ness is cumbersome and partial (strings scattered around, casting needed, etc.)
See https://gist.github.com/1272269 if the gist widget does not load in-place.

I do have a solution that works for me. With the many years that I’ve been writing complex web apps using various ASP.NET frameworks, almost always with C#-based, statically-typed view engines, I have a solution that works very nicely for me. But I want to be aware of the MVC3 canonical / textbook way. So for all you MVC3 ninjas out there – please describe your way of doing it.
I will describe my approach in an upcoming post and I’d appreciate any input on it
The packages file now contains AntiXSS, AttributeRouting, Castle.Core (for my good pal DictionaryAdapterFactory), elmah, MarkdownSharp, mongocsharpdriver, XmlRpcMvc and XmlRpcMvc.MetaWeblog (awesome!)
BTW, expect a post on using the DictionaryAdapterFactory to make handling Controller=>View data transport truly awesome.
What’s missing here? IoC !
Yeah, I did not bother with that for now. I have my tiny 15-LOC thing, and this blog does not need anything of the sort.
Some things might still break. Files I used to host for download will probably not work now. I will fix that soon, I hope, time permitting.
Note to self – reshuffle the tags here on the blog. I need to re-tag many entries. Maybe I’ll let site visitors suggest tags?
Brad is 100% correct regarding the way the CLR treats interface attributes, but this does not mean users should not be able to use validation attributes on model interfaces.
So I sat down to extend the model validation to do just that: (see https://gist.github.com/1163635 if it is broken here)
Now, I know it is hacky – it should not go in a FilterAttributes() method. If I had access to the sources I’d have added a virtual “GetValidationAttribute” method on DataAnnotationsModelMetadataProvider… (hint hint hint)