"The Full Spectrum Developer"

This week I was invited to attend a talk by Michael Feathers at the offices of News UK. The topic was “The Full Spectrum Developer”.

Taking inspiration from Laurence Gellert’s post “What is a Full Stack Developer?”, Michael talked about how we should broaden our horizons. Don’t just master one small domain; try to understand a bit about every aspect of your industry, from business needs, to hosting, to user experience. That way we can contribute intelligently and reduce the communication cost between separate teams.

It was a thought-provoking talk, so I wanted to share my notes:

  • There’s a wide span in skills across the industry. What motivates people to go beyond?
  • Curiosity? What does it mean? How do you nurture it?
  • “The Full Stack Developer” understands the whole stack, as well as the business
  • Silos don’t work well due to the communication cost
  • The socio-dynamics of different teams can make things go crazy - “the designers don’t understand about development” - “the developers don’t understand about design”… We have to move away from that
  • Don’t get yourself caught in a silo
  • The Full Spectrum Developer knows about:
    • Server, network, and hosting environment
    • Data modelling
    • Business logic
    • API layer / Action Layer / MVC
    • User Interface
    • User Experience
    • Customer and business needs
  • Know it all the way up and down the chain
  • David A. Thomas is a great example of a full stack developer - knows everything from microprocessor coding to employee retention
  • Bjarne Stroustrup built C++ for himself and became the father of a language
  • Robert Fripp continuously reinvented himself
  • Be distracted sometimes
  • “I can see this barrier. Can I go around it? No, I’ll break through it”
  • "Be the stupidest person in the room" - if you’re going to transition to new things, get used to this
  • Start asking questions early on in the conversation
  • Never underestimate what you know - we all have things to contribute
  • “The people who do well are people who read” - reading is fundamental
  • Cells were the inspiration for Object Oriented Programming
  • In Smalltalk, everything is an object, all the way down
  • Alan Kay, creator of Smalltalk, is great with metaphors - take a metaphor from one part of the world and apply it to another
  • One that didn’t really take off is Lucid, “the dataflow programming language” - uses the metaphor of fluid dynamics - powerful
  • APL is a programming language that uses non-ASCII characters. Funky!
  • J is derived from APL - Quicksort in one line!
  • Challenge yourself - cultivate 3 conceptual interests outside of work
  • E.g. Functional programming - pick something outside of your expertise
  • Push the edges - try it out
  • Different languages - build a repertoire
  • Recommended books:
  • Learn what you need to do your job well, but take distractions - be curious
  • Time-box it - don’t think of it as a chore

Awesome Mobile Animations

Earlier this week I gave a talk at the EdTech Developers Meetup on "Awesome Mobile Animations".

It’s about the kinds of fluid animations that native apps are increasingly using, and how we web developers should try to up our game and make our animations better too. It gives examples that use CSS3D, Canvas and WebGL. Then it runs through some performance tips.

If that sounds interesting then you might like to check out the…

Blog post: geeking.co/t/awesome-mobile-web-animations/22

Slides: awesome-mobile-animations.herokuapp.com

Source code: github.com/poshaughnessy/edtechdevs-awesome-mobile-animations

Turning hacks into products: Lessons from Let’s Code!

[This is a cross-post of the discussion topic on geeking.co]

Phase 0: The idea

You’ve probably been hearing quite a lot lately about children learning to code. Interest in the topic has exploded recently, with initiatives such as the Hour of Code and new applications and kits designed to help teach programming coming out all the time. This is a post about the process we went through to create our own code-learning web app, Let’s Code! Along the way, we learned that it’s great to hack-start a project, but turning prototypes into products isn’t easy…

The story begins in April 2012. At this time there was an increasing amount of discussion here in the UK about the ICT (Information and Communications Technology) curriculum needing to be modernised. Consensus was growing that we were failing kids by teaching them only how to use software (like Microsoft Office) and not how to create it. The existing curriculum lacked the potential for creativity, it bored students and it put them off studying Computer Science in higher education. The result was an increasingly concerning skills shortage.

This troublesome situation wasn’t lost on our colleagues, who suggested that we - Pearson’s Future Technologies team - might try to do something to help. Since we’re a central R&D team inside the “world’s leading learning company”, it’s our job to prototype new concepts and explore new technology that may affect education. At one of our bi-annual “Future Technologies Champions” meetups, where we come together with our colleagues to generate ideas for, and decide on, our next projects, this idea was voted top. So it was decided: we would create an application to address the IT skills shortage and to help make ICT fun again.

 

Phase 1: Hack-starting (pre-alpha)

We decided to “hack-start” the project, using a hackathon-style format internally to kick things off quickly. We came together with some of our expert colleagues, including an ICT subject advisor, and hacked away together in a room for two days. Over the course of that time, we designed the basics of the app and created the first, quick prototype.

Some of the decisions we made in this short period of time were:

  • To make it as visual as possible and to foster creativity by allowing young people to create their own applications

  • To base it around objects that have properties and can move, as part of real life scenarios that young people can understand and relate to. For example, the long jump in athletics (the London Games were just coming up back then!)


We spent most of the first day figuring out what we were going to do, but by the end of the second day we had hacked away with Easel.js and created a working long jump demo.


Although it was basically useless as an actual application at this point, it was really useful as a starting point, to convey what we were hoping to create (both to ourselves, and anyone we spoke to about it).

 

Phase 2: Prototyping (alpha)

Following the hack, we archived that code and started developing again from scratch. (There’s not much harm in throwing away just over a day’s worth of messy code!)

Over the course of the next few weeks, with help from Phil Powell who joined us for a few weeks as a contractor, we built an alpha using Backbone.js. It featured most of the core features for the app. It allowed objects to be added to the stage. You could edit their properties and see the effects. You could make events trigger things (e.g. hooking up a button click to make the athlete start running). It included key programming concepts: objects, properties, methods and events. We also had a couple of tutorials to guide the user through getting started, although you could go off-script and do your own thing too.
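To give a flavour of those concepts in plain JavaScript (purely an illustrative sketch - not the real Let’s Code! API):

// Illustrative only: an object with properties, and an event
// wired up to trigger one of its methods
var athlete = {
  x: 0,
  speed: 8,
  run: function() { console.log('Running at speed ' + this.speed); }
};

var button = document.querySelector('#go-button'); // hypothetical button
button.addEventListener('click', function() {
  athlete.run(); // clicking the button makes the athlete start running
});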

We got the site deployed for anyone to try out and made the code public on GitHub. We felt that we had conveyed the concept and we started to get some people excited about it.

It felt like our hardest work had been done.

The main problem was that there was no server side component at all. You couldn’t save your project and if you refreshed the page, you’d go back to the beginning! Also, we hadn’t spent much time on cross-browser testing and it didn’t work in Internet Explorer.


 

Phase 3: Productizing (beta)

We were all really keen to see our prototype live on and we wanted to learn as much as we could about developing an open source project, to see what lessons we could pass on to other business units in Pearson.

So we decided to spend more time to “productize” it, i.e. add the rest of the features it needed to be a minimum viable product. We’d need to add a server (we chose Node.js) and a database (we chose MongoDB) so we could store users and projects.
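For illustration, storing projects might look something like this (a hypothetical sketch assuming Mongoose as the MongoDB layer - not our actual schema):

var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/letscode');

// A project belongs to a user and stores the serialised stage objects
var projectSchema = new mongoose.Schema({
  name: String,
  owner: { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
  objects: [mongoose.Schema.Types.Mixed]
});

module.exports = mongoose.model('Project', projectSchema);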

We’re used to creating prototypes (23 so far, and counting!) but this was Future Tech’s first experience trying to actually extend one, beyond our usual 8-10 week timeframe.

Naively, I wasn’t expecting it to be too difficult. I thought that we could simply build out the alpha and add more features. I did anticipate that we would need to do some refactoring as we went along, but I didn’t expect it to be that much work.

Starting again

Because we’re a very small team (just two developers) and always busy working on multiple new prototypes at once, we had Edward Ruchevits - who had just joined us as a developer intern on a year out from university - take on most of the work on this productizing effort.

Edward is a super-smart and knowledgeable developer, but we really threw him in the deep end on this one! I think that, after being employed as a developer for a bit over a decade now, I had forgotten how difficult it is when you first start to work with other people’s code. Edward was also coding with most of the libraries and technologies (e.g. Backbone.js, Node.js, MongoDB) for the first time. Furthermore, as I was basically full-time on my next project, I wasn’t able to spend enough time with him. So, naturally, Edward didn’t get on well with our fairly messy alpha code. He decided to create a new version from scratch, confident that it would actually be quicker that way, because he would find it easier to work out how things were pieced together.

We were all concerned about starting over again, but we thought that we would be able to pull in code from the old version as we went along. Unfortunately, the codebase quickly diverged. As such, soon we weren’t able to pull in much of the old code at all. 

All this meant that a lot of the effort for the beta went into rewriting features that we already had in the alpha. That was obviously quite frustrating for all of us.

Edward introduced some great improvements though. He adopted Marionette which helped to structure our Backbone code better. He switched to using HTML5 canvas for the stage, instead of DOM elements. And he realised that we didn’t actually need some of the nastiest code from the alpha, and was able to remove it.

But gradually we realised that we’d really underestimated the effort required…

Ramping up

Soon we knew that we’d need some extra help. We asked a London agency called Adaptive Lab to work on it with us for a few weeks.

Adaptive Lab helped to bring a more rigorous approach to development. They were conscientious about writing tests, they conducted code reviews and they were great at mentoring Edward.

A little later we were also joined by ThinkGareth, who dropped right in with great expertise in the technologies we were using, and he helped us all a great deal.

Debugging and wrapping up

Coming to the end of the time we had available, we started concentrating less on adding/restoring features and more on debugging.

At this time, we noticed one particular kind of bug kept rearing its head…

We’re using an Event Aggregator (Backbone.wreqr) to de-couple the various components of the app. Instead of calling other modules directly, you can fire an event which other modules can listen out for and respond to appropriately.

This is great, except we kept running into bugs caused by accidentally leaving old event listeners lying around. It’s easy to do; Marionette Views automatically unbind events you’ve hooked up with listenTo (as opposed to on) when you close the view. But you can’t do this with the Event Aggregator events; you have to remember to turn the listeners off in the view ‘close’ methods.
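As an illustration of the pattern (a minimal sketch assuming a Marionette 1.x-style view and a shared aggregator; the event name is made up):

var vent = new Backbone.Wreqr.EventAggregator();

var ScoreView = Backbone.Marionette.ItemView.extend({
  initialize: function() {
    // Unlike listenTo bindings, aggregator events aren't cleaned up for us
    vent.on('score:changed', this.render, this);
  },
  onClose: function() {
    // Forgetting this line leaves a zombie listener behind,
    // causing the multiple-firing bugs described below
    vent.off('score:changed', this.render, this);
  }
});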

The problems that this caused were very hard to debug, because it really wasn’t obvious from the fairly random-looking multiple-firing effects what the actual causes were. There might be something that could help with this now though: Chrome debugging tools now have the ability to inspect asynchronous call stacks.

Eventually, we fixed all the critical bugs that we had identified, pushed out the source code to GitHub, and made the new version live.

 

Lessons learned

So what did we learn from this experience?

Hack-start for the win

Hack-starting your project can be a brilliant way to kick things off. It’s amazing how many important decisions can be made in such a short space of time. Let’s Code! evolved further along the way, but we pretty much had the core concept figured out after just two days. From the direction that gave us, it was relatively easy to then create our 8-10 week ‘alpha’ prototype.

Prototypes != products

But going from prototype to product is harder. Don’t treat this lightly - it should be approached just as carefully as any production build.

Compared to quick hacks and prototypes, doing things properly takes a lot more time. For production apps (especially open source ones, where it’s essential that external developers can get up and running as quickly as possible), you need: 

  • To think more carefully about the architecture

  • To adopt automated testing

  • To spend a lot more time on cross-browser testing and fixes

  • To write decent documentation

  • To use solid development practices, such as conducting code reviews


All of these things can slow down development, but they’re necessary to ensure the quality of the code, and therefore the app.

It’s hard to refactor, even harder to rewrite

Unless it’s just a few days’ worth of messy hack code, or you have a really good reason, prefer refactoring over rewriting (see also Joel Spolsky’s Things You Should Never Do!)


Experienced developers should lead the refactoring in a hands-on way, ensuring that newer developers are properly supported.

Get the infrastructure in place early

One of the things that’s a lot better with the beta is that we have a good infrastructure in place for things like linting, testing, building and deploying. Edward introduced Grunt for this. It was definitely worth putting this in place early - it has saved us time on each occasion we’ve deployed, and automated linting has picked up many issues along the way. Having a good set of tests in place is also invaluable if you’re doing lots of refactoring.
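For a rough idea of the shape of it (a trimmed-down sketch, not our actual Gruntfile):

module.exports = function(grunt) {
  grunt.initConfig({
    jshint: { all: ['app/js/**/*.js'] },
    compass: { dist: { options: { sassDir: 'sass', cssDir: 'css' } } },
    uglify: { dist: { files: { 'dist/app.min.js': ['app/js/**/*.js'] } } },
    watch: { scripts: { files: ['app/js/**/*.js'], tasks: ['jshint'] } }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-compass');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('build', ['jshint', 'compass', 'uglify']);
};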

 

Test with real users early on

We benefitted from getting feedback from real users while development was still ongoing. Arun, our lead designer, surveyed a number of teachers and he visited schools to see it being used first-hand. It was very useful to understand their expectations and their reactions to the app. 

We haven’t been able to address everything at this stage, but as we are able to develop it further, it should help us to focus on what’s most important.

 

Try it out and grab the code

The Let’s Code! beta is now live for anyone to use. At the moment, the beta is still quite limited:

  • You can only create projects via the tutorials

  • There are only three tutorials, based on a single scenario: the long jump

  • The actual JavaScript code for methods is not yet editable (although you can view it) 

However, we hope that it conveys the concept and you can imagine lots of ways that it could be extended.

It’s an open source project freely available on GitHub, to enable developers, including code-savvy teachers, to customise the app and add their own features, assets and tutorials. We hope to encourage the developer community to find fresh and creative ways to extend Let’s Code! and help more young people start learning how to code.

Whether you’re a teacher, student, developer, or just someone who’s interested to take a look, we’d love your feedback, so please try it out and let us know what you think!


April 2014: Highlights from Front End London and State of the Browser 4

Last week I attended Front End London and State of the Browser 4. Here are my highlights (what were yours?):

 

Bridging the gap between developers & designers

[Link to presentation slides and notes]

Kaelig Deloumeau-Prigent from the Guardian gave some insights into how they’re developing the new responsive Guardian website.

The numbers are impressive/scary:

  • About 16K lines of Sass (full compilation takes “a while”!)
  • 55 contributors to the GitHub project (all internal so far), 25 of those working on HTML + CSS
  • About 4 releases per day

So if their designers and developers aren’t communicating efficiently, they have a problem. That’s why they define their whole Design System using Sass variables. Colours, media query breakpoints, the grid system (padding, margins)… they’re all defined with meaningful names that will be used by both the designers and the developers.

They also created their own Sass mixin called sass-mq to help define their media queries in a more elegant way. It allows them to do things like this:

@include mq($from: tablet, $to: desktop) {
  ...
}

   

"Mobile Web is rubbish"

[Link to presentation slides]

Peter Gasston gave an entertaining talk, titled “Over-promised and under-delivered”, about how we need to up our game, because too many Mobile Web experiences are just rubbish!

Only 41% of the top 100 sites have an actual mobile site, and only 6% are significantly optimised in terms of page weight.


Some mobile sites that Peter singled out for particular condemnation were:

A site which tells you which way to hold your phone!


And a mobile site which tells you to go away “and visit our site on a real screen”!


(As Jake Archibald later pointed out, they obviously didn’t clock that this would be extra ironic, given that their desktop site lists “Responsive Design” as one of their specialist skills!)

For more hilarious/exasperating examples of poor mobile web experiences, see WTF Mobile Web and Broken Mobile Web.

It’s no wonder that people are spending more time in native apps and less time in the browser.

Peter then went on to discuss things that we should do, such as:

Use the meta viewport tag to set width=device-width and iOS 7.1’s new minimal-ui property, which makes Safari automatically use ‘full-screen’ mode, i.e. hiding the URL bar in portrait, and also hiding the status bar and title bar in landscape.


But:

  • Don’t set user-scalable=no or maximum-scale=1! That’s an accessibility no-no. Think about people who are partially sighted.
  • A member of the audience said they’d done some user testing and with minimal-ui switched on, users didn’t know how to find the back button, or how to exit their app!
  • Apparently minimal-ui causes problems if you’re also using smart app banners.
  • Also, the viewport meta tag is not a standard and there are efforts to replace it with a @viewport CSS spec (unfortunately although it’s CSS, to ensure it’s picked up as quickly as possible, you are advised to include it inline!).

And hopefully the new “Picture 2.0” <picture> element standard will help with responsive images. There’s a polyfill called Picturefill which looks like it’s up to date with the latest spec. And it should actually ship in browsers in “a few months”.

 

Network: Optional (Service Workers)

[Earlier version of the talk here]

Always entertaining, Jake Archibald gave a great talk (mainly) about Service Workers (the new NavigationController which was the new AppCache!).

There’s a lot to it and it seems like something that will take time for people to fully understand and figure out the potential for.

Peter Gasston earlier described it as “like Node.js that works in your browser”. Apparently it can allow you to do background services that continue working even if someone leaves your site. This could be great for offline capabilities, e.g. checking when a connection comes back up and then syncing back to the server when it has. Also it gives you full control over caching. You get event listeners for requests and you can effectively hijack the response and decide to return something from your cache. And your cache won’t go stale automatically; it’s up to you to remove entries when you want to.

So a usual process will be: deliver your page shell from the ServiceWorker cache, show a spinner, fetch fresh content and show that if you can, otherwise show your cached version, then hide your spinner.

You can also check requests to your APIs and look for a header (e.g. x-use-cache, something you define yourself) and return cached responses if the header is included.
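To illustrate the idea (just a sketch - the API was still in flux at the time, so this follows the general shape Service Workers later settled on, and x-use-cache is a made-up header name):

self.addEventListener('fetch', function(event) {
  var request = event.request;

  // For API calls that opt in via our own header, try the cache first
  if (request.headers.get('x-use-cache')) {
    event.respondWith(
      caches.match(request).then(function(cached) {
        return cached || fetch(request);
      })
    );
  }
});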

ServiceWorker should move to a public draft and get an implementation in Chrome in the next few weeks (NB. currently there’s a flag to switch on Service Workers, but you can only test registering/unregistering!)

 

Open Web Apps

[Link to presentation slides]

Another great speaker, Christian Heilmann from Mozilla talked about Open Web Apps, i.e. “the best of Apps and the best of the Web”. Essentially they are just web apps with the addition of extra Web APIs for device features and a manifest.webapp file used to make it ‘installable’, and useful for app marketplaces such as the Firefox Marketplace.
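A manifest.webapp is just a small JSON file, along these lines (a minimal, hypothetical example):

{
  "name": "My Open Web App",
  "description": "An installable web app",
  "launch_path": "/index.html",
  "icons": { "128": "/img/icon-128.png" },
  "developer": { "name": "Example Dev", "url": "http://example.com" }
}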

I didn’t realise but they now have integration with Android as well as Firefox OS.

Cordova now supports Firefox OS and there’s an article about porting existing web apps to Firefox OS here.

 

Update

These are just four talks from the many across the two events that I picked out as my personal highlights, but there were lots of others that you should check out too.

Take a look at www.frontendlondon.co.uk and browser.londonwebstandards.org/schedule/ for the slides/videos of the rest!

 

Arduino with JavaScript (Breakout.js)

On 19th March, I attended an Introduction to Arduino with JavaScript night class.

In 3 hours, Lily Madar guided us to create our first Arduino applications using Breakout.js, a JavaScript Arduino interface for the browser.

First, we just made an LED blink:


But soon we were playing with colour-changing LEDs, buttons and potentiometers. It was exciting to be able to create a custom, physical hardware interface for the browser.
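For a flavour of what that looks like, here’s an LED blink along the lines of what we started with (a sketch from memory - the exact Breakout.js names may differ slightly):

var arduino = new BO.IOBoard('localhost', 8887); // via the Breakout server

arduino.addEventListener(BO.IOBoardEvent.READY, function() {
  var led = new BO.io.LED(arduino, arduino.getDigitalPin(11));
  var isOn = false;

  // Toggle the LED every half a second
  setInterval(function() {
    isOn ? led.off() : led.on();
    isOn = !isOn;
  }, 500);
});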

For our final project, we had a choice. I chose an HTML5 Canvas Etch-a-Sketch. It was easy to hook up two potentiometers for drawing the line horizontally and vertically. And I included a button for erasing the picture.


My (messy) source code is up on GitHub.

The biggest issues I found with Breakout.js were:

  • The interface with the hardware is only live while your tab is open in the browser
  • Most of us had to keep restarting the Breakout server while we were developing, due to weird errors

So it’s not for real, consumer applications, but it’s a cool prototyping tool and could make for some fun personal/office projects. For example, you could make an LED countdown clock, counting down to your next release. Or a set of build server traffic lights.

All in all, it was a really fun event. A fellow attendee also wrote up a nice blog post about it here.

Pearson Coders - FAQ

At Pearson, with the help of colleagues, I run a company-wide developer community called Coders. This is an FAQ for the benefit of potential external speakers.

Who are Pearson?

You can check out www.pearson.com, but briefly… Pearson is the “world’s leading learning company” with 40,000+ employees. We’re undergoing a massive digital evolution and we have a great potential to make a positive difference to people’s education around the world. Traditionally, people think of Pearson as a textbook publisher, but these days we’re involved in everything ed-tech.

What is Coders?

Coders is a Pearson-wide community for developers to share insights and expertise. As well as hosting ongoing discussions, we stage presentations once a month from internal and external speakers.

The talks are on a variety of topics. We often have Pearson colleagues share their work, or introduce certain technologies. And we often have external speakers share about their developer-oriented products or services.

What is Coders’ audience?

Developers and technical people in Pearson, across the world. We have offices in most countries you could think of. We have clusters of developers in the US, Asia and Europe, especially.

At the time of writing, we have over 350 members. Generally we have about 50-70 join the calls live, with more catching up with the recordings later.

In terms of technical expertise, it varies greatly. We have architects, back-end developers, front-end developers, app developers… Of course those who attend will depend somewhat on the topic.

What time and where?

We’re flexible and arrange them for a mutually suitable time. We generally host them around 4pm/4.30pm/5pm GMT/BST, so it works for the US audience too.

As for the location, they’re generally completely virtual. If the speaker is nearby our office in London or New York though and would like to visit in person, we would be happy to host them. We don’t have a travel budget, though.

Can I use my own web conference system?

So far, we’ve always used our own Pearson WebEx system, because that way we know that attendees will be able to use it without difficulties. If you need to use another system for some reason, let’s discuss it and see if it’s possible.

Can I share my talk publicly?

Unless you would prefer us not to, we will record the talk and share it internally, and also share it with you. Except in rare cases that would require discussion, we’d be happy for you to make it public too.

Who/what are you looking for from external speakers?

We’re looking for the talks to be technical and informative, rather than just pure sales pitches. We would want you to be comfortable answering technical questions. So the most suitable presenters would likely be those on the developer relations or technical side, rather than just on the business side (or multiple presenters combining both).

Why don’t you have a website?

We use an internal social platform to post events, discussions, recordings etc. (I’d like to set up an external site too, but in the interim, this blog post will have to suffice!)

What if I have another question?

Please feel free to email me, or tweet me.

Finally… thank you very much to all our speakers, past and future. We really appreciate it.

Hybrid app workflow with Grunt, via Xcode

These days, with web apps getting more complex, it’s getting more common to have an automated JavaScript-based build process - including things like:

  • Running tests and linting
  • CSS compilation (from SASS/Less)
  • Combining and minifying JavaScript files
  • Single-command build and deployment
  • Live re-loading of changes during development


Grunt (like Ant for web apps) enables all of these and a whole lot more. It’s only been on the scene since 2012, but it seems to be exploding in popularity right now.

But how about using Grunt for a hybrid app?

I’ve been reading and talking about hybrid apps for a long time, but I’m actually just developing one for the first time now. Despite being a newbie, I thought it would be worth sharing how we’re setting it up.

NB. Despite the aim being to make the app as cross-platform portable as possible, this post is going to specifically talk about iOS (we’re only targeting one platform for the initial prototype).

The first thing I tried was - of course - PhoneGap. But I was disappointed with the standard workflow. I don’t want to have to run the PhoneGap build first, then load the resulting project in Xcode, and then build and run the app from there. That makes the feedback loop for development too slow.

It might have been OK if we could have just tested the app in the web browser most of the time - if we just wanted to wrap a pure web app inside a native wrapper, or bolt on a plugin or two. But we need to develop a significant portion of this app in native code, so we need to be testing the actual native app very regularly. We don’t want to have two separate compilation steps. We need to build it and run it on an iOS device or emulator as quickly as possible.

It was about this time that we stopped looking at PhoneGap and started investigating how much work it would be to just write a UIWebView app with a simple iOS JavaScript bridge. I think we’ll probably go with the latter now, although I’m still wondering about PhoneGap (see below)…

So, what about Grunt? As I mentioned, we don’t want two separate build processes, so we need to combine the Grunt build process with the Xcode build process. Thankfully we can do that quite easily with a Build Phase Run Script.

A handy StackOverflow post told me this wasn’t too crazy an idea. I soon ran into a problem though: the Compass SASS compilation failed. It turned out to be just a case of fiddling with the PATH and environment variables. I’ve written up the solution as a self-answered StackOverflow post:

http://stackoverflow.com/questions/19710361/grunt-compass-task-fails-in-xcode-build-script/

So now our workflow is simply:

  1. Open up both our preferred web IDE (I use WebStorm) and Xcode
  2. Edit the web code in the web IDE
  3. Do Build + Run in Xcode.

Update

It’s now a few weeks later. Unfortunately we’ve since ditched this Xcode-Grunt integration! For the following reasons:

  • We’re sharing our Xcode project settings via Git, and we don’t have the same build paths.
  • For some reason it doesn’t update the JavaScript code until you re-build twice! I’m not sure why, but I guess it may be to do with the stage of the build process when the Grunt task takes place - perhaps it happens too late?
  • We’ve split up the work so my colleague is mainly working on the Objective-C side and I’m working mainly on the Web side. So far, my colleague hasn’t needed to update the Web code much, and I haven’t needed to run it up inside the native wrapper much.
  • I’ve realised it’s not actually that hard just to run grunt build separately first ;-)

Oh well… always learning!

Attempting fast 3D graphics for mobile web, without WebGL

Is it possible to create fast 3D interactive graphics for mobile devices, using web technologies? Since WebGL is not yet well supported on mobile devices, what technology should you use? CSS3D? Canvas? Or something else? And is there a high-level graphics library that could help you?

That was the subject of my presentation last night at the HTML5 CodeShow meetup.


I’ve been fortunate enough to be able to use Three.js for a couple of desktop web projects recently, and I’ve been very impressed with how easy it makes developing WebGL applications.

So when we were tasked with creating a new prototype mobile web application that may benefit from 3D graphics (an app for helping students to revise, called ZamBlocks), I jumped at the chance to try Three.js again. In this case, I wouldn’t be able to use its WebGLRenderer, but it also comes with a CSS3DRenderer and a CanvasRenderer. Both have good support on mobile devices. But then there’s the question of performance…
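Part of the appeal of Three.js is that much of your scene code can stay the same while you swap the renderer. Roughly (a simplified sketch; note that the CSS3D renderer is a bit different, rendering CSS3DObjects rather than meshes):

var width = window.innerWidth, height = window.innerHeight;

var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, width / height, 1, 1000);
camera.position.z = 500;

// Swap this line for THREE.WebGLRenderer where it's supported
var renderer = new THREE.CanvasRenderer();
renderer.setSize(width, height);
document.body.appendChild(renderer.domElement);

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();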

My presentation runs through the different things I attempted, to try to achieve a good frame rate on various mobile devices. Along the way, I ran into some big hurdles, but I also found a couple of optimisations that helped significantly.

And as it turned out, the final designs for this particular prototype didn’t really require any 3D elements or whizzy animations. (In fact, the whole thing turned out to be very simple. If I was to start from scratch, I’d probably just use DOM elements, or maybe <canvas> directly without a library). But since we’re an R&D team, it’s good for us to try pushing the boundaries and seeing what we can learn along the way. It was a great opportunity to try the other Three.js renderers and explore what’s currently feasible for mobile devices.

As well as Three.js, my presentation also briefly covers Pixi.js, a fairly new 2D graphics engine. A bit like a 2D version of Three.js, it’s built for speed. It will use WebGL if it’s available, but if not fall back to Canvas.
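Getting a stage rendering in Pixi.js looks something like this (a minimal sketch based on the early Pixi API):

// autoDetectRenderer returns a WebGL renderer where available,
// otherwise a Canvas one
var renderer = PIXI.autoDetectRenderer(800, 600);
document.body.appendChild(renderer.view);

var stage = new PIXI.Stage(0x222222);

function animate() {
  requestAnimationFrame(animate);
  renderer.render(stage);
}
animate();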

My slides contain lots of embedded examples and videos of how things look on mobile. You can check them out here (arrow keys / swipe to navigate):

http://speedy-web-uis.herokuapp.com

And the code is on GitHub here:

https://github.com/poshaughnessy/speedy-web-uis

The Impossibilities and Possibilities of Google Glass development

In the last few days, Google have released the API and developer documentation for Google Glass.

They also have some videos (such as the SXSW talk, plus these) to guide us through the capabilities.

I thought I’d put together a quick list of the Impossibilities and Possibilities for third party developers (as I see it, from the information so far):

The following are not possible:

'Apps'

You can’t develop ‘apps’ as such, or actually install anything on the device. But you can develop services through timeline cards. These cards can contain small amounts of text, HTML, images, or a map, but there’s no scrolling, JavaScript, or form elements.

Update: This isn’t quite true! It turns out it is possible for techies to install Android APKs - by plugging it in with USB and enabling debug mode, on the Explorer version of the device at least. See this post by Mike DiGiovanni:

https://plus.google.com/116031914637788986927/posts/Abvh8vmvPJk
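As for what third parties can do: a service inserts a timeline card by POSTing JSON to the Mirror API’s timeline endpoint, along these lines (a minimal sketch of the request body, with the OAuth handshake omitted):

// POST https://www.googleapis.com/mirror/v1/timeline
// Authorization: Bearer <access token>
{
  "text": "Hello from my Glassware!",
  "notification": { "level": "DEFAULT" }
}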

Realtime picture/video, or voice integration

It’s only possible to tap into users’ images and video if they choose to share them through your service, after they’ve been taken. And it doesn’t seem possible for 3rd party developers to do anything with voice input. “At the moment, there doesn’t appear to be any support for retrieving a camera feed or an audio stream” (source)

Update: Except if you root it, of course! See:

http://arstechnica.com/security/2013/05/rooting-exploit-could-turn-google-glass-into-secret-surveillance-tool/

AR

Early discussions about Google Glass kept referring to it as an AR device. It’s not really AR at all. It doesn’t give you the capability to augment the user’s real-world view, except indirectly, through the small, fixed screen. (It’s actually less of an AR device than a mobile phone held up in front of your face).

Web browsing

"Users don’t browse the web on Glass (well, they can ask questions to Google but there is no API for that yet)" (Max Firtman)

Notifications

"We push, update and delete cards from our server, just for being there if the user thinks it’s time to see the timeline. It’s probable that our card will never be seen by the user… It’s not like a mobile push notification." (Max Firtman)

Eye-tracking

Early unofficial reports said there would be a second camera facing towards you, for eye tracking. From the official tech specs, it seems that’s not the case.

Update: I was right first time - it’s not mentioned in the tech specs (maybe they just don’t want to shout about it much right now?) but there’s definitely an eye tracking camera - that’s what enables ‘Winky’:

http://arstechnica.com/gadgets/2013/05/google-glass-developer-writes-an-app-to-snap-photos-with-just-a-wink/

Location, unless paired with Android 4+ phone

It was popularly reported that Glass would work with phones other than Android. But MyGlass, which includes the GPS and SMS capability, requires Android ICS or higher (source)

Direct revenue

There’s no charging for timeline cards, no payment for virtual goods or upgrades, and no advertising (source)

So what kind of services are feasible?

Services for often-updated content

To provide short snippets of content that the user will often want to have a quick glance at, to see the latest. For example, news headlines.

Update: You can also have short amounts of content read out for the user, using the “read-aloud” feature. See:

http://thenextweb.com/media/2013/04/25/the-new-york-times-releases-a-google-glass-app-that-reads-article-summaries-aloud/

Location services

To provide advice/information about nearby locations. For example, travel information or tourist destination tips.

Share services

For sharing your photos and video with your friends. Or sharing them with services (automated or not) that can do something with them and send you something back.

Simple communication / social networking

It’s possible not just to consume 3rd party content, but to reply with text or respond with selections. So reading and creating emails, text messages, Facebook status updates, tweets…  should all be possible.

To summarise…

The possibilities for third party developers are more limited than many hoped. But, there’s still an exciting amount to explore. And remember this is the very first API for the very first commercial device of its kind. (Compare it to the first version of the iPhone, which didn’t have an SDK or an App Store).

To quote Timothy Jordan, "It’s early days… We’re really just getting started".

NodeCopter: geeking out with flying robots

What happens when you combine a room full of geeks with a bunch of programmable flying robots?

That’s what I found out when I attended NodeCopter last weekend, a “small, full day event where teams of 3 or 4 get together to hack on flying robots using JavaScript”.

The event, at Forward’s offices in London, was a sell-out; I was fortunate to see a tweet from organiser Andrew Nesbitt just in time to snap up a free ticket before the last one went.

Photo by Andrew Nesbitt

A number of companies had each sponsored a Parrot AR Drone 2 (they cost about $300 each).

It was great fun and amazing to see all the very different - but equally cool - hacks that the different teams came up with.

My favourites were:

Making the drone bounce up and down in time to music beats, using dancer.js, a JavaScript audio library.

Photo by Andrew Nesbitt

Controlling the drone using QR codes. They got the drone to hover in the air and walked up to it with a QR code on their phone or printed on paper. The code is recognised through the drone’s in-built video camera, instructing it to do various things such as ‘dancing’ in the air.

Photo by Andrew Nesbitt

Controlling the drone by pressing buttons drawn with ink on a piece of A4 paper. They used special conductive ink hooked up to Arduino.

Photo by Andrew Nesbitt

Controlling the drone with a Playstation controller, using the HTML5 Gamepad API.

As for me, I teamed up with Markus Kobler and Matt Copperwaite and created a Leap Motion hack.

Markus, me and Matt - photo by Andrew Nesbitt

For those who haven’t heard of it yet, the Leap Motion is a very small and accurate 3D gestural input device. In other words, you can simply wave your hands or fingers in the air to control things through your computer.

We programmed it so that moving your hand controls the movement of the drone in 3 dimensions. A simple ‘tap’ gesture in the air makes the drone land back down. And the best bit: a ‘circle’ gesture makes it do a barrel roll!
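Our hack essentially boiled down to mapping Leap Motion frames onto drone commands, roughly like this (a reconstruction using the leapjs and ar-drone Node modules - not our exact code):

var Leap = require('leapjs');
var arDrone = require('ar-drone');

var client = arDrone.createClient();
client.takeoff();

Leap.loop({ enableGestures: true }, function(frame) {
  if (frame.hands.length > 0) {
    var palm = frame.hands[0].palmPosition; // [x, y, z] in mm
    // Nudge the drone in the direction the hand moves
    client.up(palm[1] > 250 ? 0.3 : 0);
    client.right(palm[0] > 50 ? 0.3 : 0);
  }

  frame.gestures.forEach(function(gesture) {
    if (gesture.type === 'circle') {
      client.animate('flipLeft', 1000); // the barrel roll!
    } else if (gesture.type === 'keyTap') {
      client.land(); // a 'tap' in the air brings it back down
    }
  });
});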

Here’s our demo from the end of the event:

 

Nodecopter London from Markus Kobler on Vimeo.

All in all, it was surely the geekiest event I’ve ever attended, but also one of the most memorable!

NodeCopter events continue to be staged across the globe, so if you’d like to attend one yourself, be sure to keep an eye on the Upcoming Events page.