Arduino with JavaScript (Breakout.js)

On 19th March, I attended an Introduction to Arduino with JavaScript night class.

In 3 hours, Lily Madar guided us to create our first Arduino applications using Breakout.js, a JavaScript Arduino interface for the browser.

First, we just made an LED blink:

image

But soon we were playing with colour-changing LEDs, buttons and potentiometers. It was exciting to be able to create a custom, physical hardware interface for the browser.

For our final project, we had a choice. I chose an HTML5 Canvas Etch-a-Sketch. It was easy to hook up two potentiometers for drawing the line horizontally and vertically. And I included a button for erasing the picture.
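The heart of it is just scaling each potentiometer reading onto the canvas. Here's a simplified sketch (illustrative names, not my actual code), assuming the normalised 0–1 readings that Breakout.js provides for analog inputs:

```javascript
// Map two normalised potentiometer readings (0-1) to a canvas coordinate
function potsToCanvasPoint(potX, potY, canvasWidth, canvasHeight) {
  return {
    x: Math.round(potX * canvasWidth),
    y: Math.round(potY * canvasHeight)
  };
}

// On each new reading, draw a short line segment from the previous point
function drawSegment(ctx, from, to) {
  ctx.beginPath();
  ctx.moveTo(from.x, from.y);
  ctx.lineTo(to.x, to.y);
  ctx.stroke();
}

console.log(potsToCanvasPoint(0.5, 0.25, 400, 300)); // → { x: 200, y: 75 }
```

The erase button then just clears the canvas with `ctx.clearRect(0, 0, canvasWidth, canvasHeight)`.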

image

My (messy) source code is up on GitHub.

The biggest issues I found with Breakout.js were:

  • The interface with the hardware is only live while your tab is open in the browser
  • Most of us had to keep restarting the Breakout server while developing, due to weird errors

So it’s not for real, consumer applications, but it’s a cool prototyping tool and could make for some fun personal/office projects. For example, you could make an LED countdown clock, counting down to your next release. Or a set of build server traffic lights.

All in all, it was a really fun event. A fellow attendee also wrote up a nice blog post about it here.

Pearson Coders - FAQ

At Pearson, with the help of colleagues, I run a company-wide developer community called Coders. This is an FAQ for the benefit of potential external speakers.

Who are Pearson?

You can check out www.pearson.com, but briefly… Pearson is the “world’s leading learning company” with 40,000+ employees. We’re undergoing a massive digital evolution and we have great potential to make a positive difference to people’s education around the world. Traditionally, people think of Pearson as a textbook publisher, but these days we’re involved in everything ed-tech.

What is Coders?

Coders is a Pearson-wide community for developers to share insights and expertise. As well as hosting ongoing discussions, we stage presentations once a month from internal and external speakers.

The talks are on a variety of topics. We often have Pearson colleagues share their work, or introduce certain technologies. And we often have external speakers talk about their developer-oriented products or services.

What is Coders’ audience?

Developers and technical people in Pearson, across the world. We have offices in most countries you could think of. We have clusters of developers in the US, Asia and Europe, especially.

At the time of writing, we have over 350 members. Generally we have about 50-70 join the calls live, with more catching up with the recordings later.

In terms of technical expertise, it varies greatly. We have architects, back-end developers, front-end developers, app developers… Of course those who attend will depend somewhat on the topic.

What time and where?

We’re flexible and arrange them for a mutually suitable time. We generally host them around 4pm/4.30pm/5pm GMT/BST, so it works for the US audience too.

As for the location, they’re generally completely virtual. If the speaker is nearby our office in London or New York though and would like to visit in person, we would be happy to host them. We don’t have a travel budget, though.

Can I use my own web conference system?

So far, we’ve always used our own Pearson WebEx system, because that way we know that attendees will be able to use it without difficulties. If you need to use another system for some reason, let’s discuss it and see whether it’s possible.

Can I share my talk publicly?

Unless you would prefer us not to, we will record the talk and share it internally, and also share it with you. Except in rare cases that would require discussion, we’d be happy for you to make it public too.

Who/what are you looking for from external speakers?

We’re looking for the talks to be technical and informative, rather than just pure sales pitches. We would want you to be comfortable answering technical questions. So the most suitable presenters would likely be those on the developer relations or technical side, rather than just on the business side (or multiple presenters combining both).

Why don’t you have a website?

We use an internal social platform to post events, discussions, recordings etc. (I’d like to set up an external site too, but in the interim, this blog post will have to suffice!)

What if I have another question?

Please feel free to email me, or tweet me.

Finally… thank you very much to all our speakers, past and future. We really appreciate it.

Hybrid app workflow with Grunt, via XCode

These days, with web apps getting more complex, it’s becoming more common to have an automated JavaScript-based build process, including things like:

  • Running tests and linting
  • CSS compilation (from SASS/Less)
  • Combining and minifying JavaScript files
  • Single-command build and deployment
  • Live re-loading of changes during development


Grunt (like Ant for web apps) enables all of these and a whole lot more. It’s only been on the scene since 2012, but it seems to be exploding in popularity right now.

But how about using Grunt for a hybrid app?

I’ve been reading and talking about hybrid apps for a long time, but I’m actually just developing one for the first time now. Despite being a newbie, I thought it would be worth sharing how we’re setting it up.

NB. Despite the aim being to make the app as cross-platform portable as possible, this post is going to specifically talk about iOS (we’re only targeting one platform for the initial prototype).

The first thing I tried was - of course - PhoneGap. But I was disappointed with the standard workflow. I don’t want to have to run the PhoneGap build first, then load the resulting project in XCode, and then build and run the app from there. That makes the feedback loop for development too slow.

It might have been OK if we could have just tested the app in the web browser most of the time - if we just wanted to wrap a pure web app inside a native wrapper, or bolt on a plugin or two. But we need to develop a significant portion of this app in native code, so we need to be testing the actual native app very regularly. We don’t want to have two separate compilation steps. We need to build it and run it on an iOS device or emulator as quickly as possible.

It was about this time that we stopped looking at PhoneGap and started investigating how much work it would be to just write a UIWebView app with a simple iOS JavaScript bridge. I think we’ll probably go with the latter now, although I’m still wondering about PhoneGap (see below)…
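For illustration, the JavaScript half of such a bridge is often nothing more than navigating to a custom URL scheme, which the native side intercepts in the UIWebView delegate’s shouldStartLoadWithRequest:. A sketch (the scheme and action names are invented):

```javascript
// Build a custom-scheme URL for the native layer to intercept
// ('myapp' and the action names are made up for illustration)
function buildBridgeUrl(action, params) {
  var query = Object.keys(params || {}).map(function (key) {
    return encodeURIComponent(key) + '=' + encodeURIComponent(params[key]);
  }).join('&');
  return 'myapp://' + action + (query ? '?' + query : '');
}

// The page triggers a native call by navigating to the URL, e.g.:
// window.location.href = buildBridgeUrl('vibrate', { duration: 300 });
console.log(buildBridgeUrl('vibrate', { duration: 300 })); // → myapp://vibrate?duration=300
```

The native side returns NO from the delegate method for these URLs, so the web view never actually navigates; it just acts as a message channel.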

So, what about Grunt? As I mentioned, we don’t want two separate build processes, so we need to combine the Grunt build process with the XCode build process. Thankfully, we can do that quite easily with a Build Phase Run Script.

A handy StackOverflow post told me this wasn’t too crazy an idea. I soon ran into a problem though: the Compass SASS compilation failed. It turned out to be just a case of fiddling with the PATH and environment variables. I’ve written up the solution as a self-answered StackOverflow post:

http://stackoverflow.com/questions/19710361/grunt-compass-task-fails-in-xcode-build-script/
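The Run Script phase itself ends up being just a few lines. Something like this sketch (the PATH entries and project layout are illustrative - they depend on how node, grunt and the Compass gem are installed on your machine):

```shell
# XCode "Run Script" build phase - a sketch; adjust paths for your setup.
# XCode runs scripts with a minimal PATH, so prepend the locations of
# node/grunt and the Ruby gems that Compass needs.
export PATH="/usr/local/bin:$HOME/.rbenv/shims:$PATH"

cd "$SRCROOT/web"   # the directory containing Gruntfile.js
grunt build
```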

So now our workflow is simply:

  1. Open up both our preferred web IDE (I use WebStorm) and XCode
  2. Edit the web code in the web IDE
  3. Do Build + Run in XCode.

Update

It’s now a few weeks later. Unfortunately we’ve since ditched this XCode-Grunt integration! For the following reasons:

  • We’re sharing our XCode project settings via Git, and we don’t have the same build paths.
  • For some reason it doesn’t update the JavaScript code until you re-build twice! I’m not sure why, but I guess it may be to do with the stage of the build process when the Grunt task takes place - perhaps it happens too late?
  • We’ve split up the work so my colleague is mainly working on the Objective-C side and I’m working mainly on the Web side. So far, my colleague hasn’t needed to update the Web code much, and I haven’t needed to run it up inside the native wrapper much.
  • I’ve realised it’s not actually that hard just to run grunt build separately first ;-)

Oh well… always learning!

Attempting fast 3D graphics for mobile web, without WebGL

Is it possible to create fast 3D interactive graphics for mobile devices, using web technologies? Since WebGL is not yet well supported on mobile devices, what technology should you use? CSS3D? Canvas? Or something else? And is there a high-level graphics library that could help you?

That was the subject of my presentation last night at the HTML5 CodeShow meetup.

image

I’ve been fortunate enough to be able to use Three.js for a couple of desktop web projects recently, and I’ve been very impressed with how easy it makes it to develop WebGL applications.

So when we were tasked with creating a new prototype mobile web application that may benefit from 3D graphics (an app for helping students to revise, called ZamBlocks), I jumped at the chance to try Three.js again. In this case, I wouldn’t be able to use its WebGLRenderer, but it also comes with a CSS3DRenderer and a CanvasRenderer. Both have good support on mobile devices. But then there’s the question of performance…

My presentation runs through the different things I attempted, to try to achieve a good frame rate on various mobile devices. Along the way, I ran into some big hurdles, but I also found a couple of optimisations that helped significantly.

And as it turned out, the final designs for this particular prototype didn’t really require any 3D elements or whizzy animations. (In fact, the whole thing turned out to be very simple. If I was to start from scratch, I’d probably just use DOM elements, or maybe <canvas> directly without a library). But since we’re an R&D team, it’s good for us to try pushing the boundaries and seeing what we can learn along the way. It was a great opportunity to try the other Three.js renderers and explore what’s currently feasible for mobile devices.

As well as Three.js, my presentation also briefly covers Pixi.js, a fairly new 2D graphics engine. A bit like a 2D version of Three.js, it’s built for speed. It will use WebGL if it’s available but, if not, falls back to Canvas.

My slides contain lots of embedded examples and videos of how things look on mobile. You can check them out here (arrow keys / swipe to navigate):

http://speedy-web-uis.herokuapp.com

And the code is on GitHub here:

https://github.com/poshaughnessy/speedy-web-uis

The Impossibilities and Possibilities of Google Glass development

In the last few days, Google have released the API and developer documentation for Google Glass.

They also have some videos (such as the SXSW talk, plus these) to guide us through the capabilities.

I thought I’d put together a quick list of the Impossibilities and Possibilities for third party developers (as I see it, from the information so far):

The following are not possible:

'Apps'

You can’t develop ‘apps’ as such, or actually install anything on the device. But you can develop services through timeline cards. These cards can contain small amounts of text, HTML, images, or a map, but there’s no scrolling, JavaScript, or form elements.

Update: This isn’t quite true! It turns out it is possible for techies to install Android APKs - by plugging it in with USB and enabling debug mode, on the Explorer version of the device at least. See this post by Mike DiGiovanni:

https://plus.google.com/116031914637788986927/posts/Abvh8vmvPJk
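For a flavour of the card model: you push a card by POSTing a small JSON body to the Mirror API’s timeline endpoint. Here’s a sketch of the payload side only (OAuth and the HTTP call omitted; field names per the Mirror API docs):

```javascript
// Sketch: build a minimal timeline item to POST (with OAuth) to
// https://www.googleapis.com/mirror/v1/timeline
function makeTimelineItem(text) {
  return {
    text: text,                          // short plain text shown on the card
    notification: { level: 'DEFAULT' }   // nudge the user when it arrives
  };
}

console.log(JSON.stringify(makeTimelineItem('Hello Glass!')));
```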

Realtime picture/video, or voice integration

It’s only possible to tap into users’ images and video if they choose to share them through your service, after they’ve been taken. And it doesn’t seem possible for 3rd party developers to do anything with voice input. “At the moment, there doesn’t appear to be any support for retrieving a camera feed or an audio stream” (source)

Update: Except if you root it, of course! See:

http://arstechnica.com/security/2013/05/rooting-exploit-could-turn-google-glass-into-secret-surveillance-tool/

AR

Early discussions about Google Glass kept referring to it as an AR device. It’s not really AR at all. It doesn’t give you the capability to augment the user’s real-world view, except indirectly, through the small, fixed screen. (It’s actually less of an AR device than a mobile phone held up in front of your face).

Web browsing

"Users don’t browse the web on Glass (well, they can ask questions to Google but there is no API for that yet)" (Max Firtman)

Notifications

"We push, update and delete cards from our server, just for being there if the user thinks it’s time to see the timeline. It’s probable that our card will never be seen by the user… It’s not like a mobile push notification." (Max Firtman)

Eye-tracking

Early unofficial reports said there would be a second camera facing towards you, for eye tracking. From the official tech specs, it seems that’s not the case.

Update: I was right first time - it’s not mentioned in the tech specs (maybe they just don’t want to shout about it much right now?) but there’s definitely an eye tracking camera - that’s what enables ‘Winky’:

http://arstechnica.com/gadgets/2013/05/google-glass-developer-writes-an-app-to-snap-photos-with-just-a-wink/

Location, unless paired with Android 4+ phone

It was popularly reported that Glass would work with phones other than Android. But MyGlass, which includes the GPS and SMS capability, requires Android ICS or higher (source)

Direct revenue

There’s no charging for timeline cards, no payment for virtual goods or upgrades, and no advertising (source)

So what kind of services are feasible?

Services for often-updated content

To provide short snippets of content that the user will often want to have a quick glance at, to see the latest. For example, news headlines.

Update: You can also have short amounts of content read out for the user, using the “read-aloud” feature. See:

http://thenextweb.com/media/2013/04/25/the-new-york-times-releases-a-google-glass-app-that-reads-article-summaries-aloud/

Location services

To provide advice/information about nearby locations. For example, travel information or tourist destination tips.

Share services

For sharing your photos and video with your friends. Or sharing them with services (automated or not) that can do something with them and send you something back.

Simple communication / social networking

It’s possible not just to consume 3rd party content, but to reply with text or respond with selections. So reading and creating emails, text messages, Facebook status updates, tweets… should all be possible.

To summarise…

The possibilities for third party developers are more limited than many hoped. But, there’s still an exciting amount to explore. And remember this is the very first API for the very first commercial device of its kind. (Compare it to the first version of the iPhone, which didn’t have an SDK or an App Store).

To quote Timothy Jordan, "It’s early days… We’re really just getting started".

NodeCopter: geeking out with flying robots

What happens when you combine a room full of geeks with a bunch of programmable flying robots?

That’s what I found out when I attended NodeCopter last weekend, a “small, full day event where teams of 3 or 4 get together to hack on flying robots using JavaScript”.

The event, at Forward’s offices in London, was a sell-out; I was fortunate to see a tweet from organiser Andrew Nesbitt just in time to snap up a free ticket before the last one went.

Photo by Andrew Nesbitt

A number of companies had each sponsored a Parrot AR Drone 2 (they cost about $300 each).

It was great fun and amazing to see all the very different - but equally cool - hacks that the different teams came up with.

My favourites were:

Making the drone bounce up and down in time to music beats, using dancer.js, a JavaScript audio library.

Photo by Andrew Nesbitt

Controlling the drone using QR codes. They got the drone to hover in the air and walked up to it with a QR code on their phone or printed on paper. The code is recognised through the drone’s in-built video camera, instructing it to do various things such as ‘dancing’ in the air.

Photo by Andrew Nesbitt

Controlling the drone by pressing buttons drawn with ink on a piece of A4 paper. They used special conductive ink hooked up to Arduino.

Photo by Andrew Nesbitt

Controlling the drone with a Playstation controller, using the HTML5 Gamepad API.

As for me, I teamed up with Markus Kobler and Matt Copperwaite and created a Leap Motion hack.

Markus, me and Matt - photo by Andrew Nesbitt

For those who haven’t heard of it yet, the Leap Motion is a very small and accurate 3D gestural input device. In other words, you can simply wave your hands or fingers in the air to control things through your computer.

We programmed it so that moving your hand controls the movement of the drone in 3 dimensions. A simple ‘tap’ gesture in the air makes the drone land back down. And the best bit: a ‘circle’ gesture makes it do a barrel roll!
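To sketch the idea (illustrative, not our exact code): the Leap reports your palm position in millimetres relative to the controller, which can be scaled and clamped into normalised -1 to 1 speed values:

```javascript
// Leap palm position: x = left/right, y = height above the device,
// z = towards/away from you (all in millimetres)
function palmToDroneSpeeds(palm) {
  function clamp(v) { return Math.max(-1, Math.min(1, v)); }
  return {
    leftRight: clamp(palm[0] / 200),       // strafe left/right
    upDown: clamp((palm[1] - 200) / 200),  // climb/descend around a ~20cm hover height
    frontBack: clamp(palm[2] / 200)        // forwards/backwards
  };
}

console.log(palmToDroneSpeeds([100, 300, 0])); // → { leftRight: 0.5, upDown: 0.5, frontBack: 0 }
```

In the real thing, something like leapjs’s `Leap.loop` supplies the frames, and the sign of each value picks between the ar-drone client’s paired movement commands (`client.left`/`client.right`, `client.up`/`client.down`, and so on).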

Here’s our demo from the end of the event:


Nodecopter London from Markus Kobler on Vimeo.

All in all, it was surely the geekiest event I’ve ever attended, but also one of the most memorable!

NodeCopter events continue to be staged across the globe, so if you’d like to attend one yourself, be sure to keep an eye on the Upcoming Events page.

The Third Dimension - an introduction to WebGL and Three.js

Earlier this month I gave a talk at the London Web meetup to introduce developers to the world of WebGL and the 3D Web.

WebGL can be pretty daunting at first, for those of us without a background in OpenGL or 3D programming. So I want to help other developers know how to get started.

In a WebGL-capable browser (I recommend Chrome on the desktop), you can check out my slides here:

http://third-dimension-webgl-threejs.herokuapp.com/

I start by sharing some examples, then show what raw WebGL code is like, without a library. It’s really low level and far more code than most of us will want to write! So then I introduce Three.js, a high-level 3D graphics library that makes it a lot easier. Then I share some simple code you can use to create things like spinning 3D dinosaurs and animated robots!
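To give a taste, here’s the kind of minimal “spinning cube” scene the talk builds up to. A sketch for the browser, using the Three.js API of that era (CubeGeometry was later renamed BoxGeometry):

```javascript
// A minimal Three.js scene: one spinning cube (assumes three.js is loaded)
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 3;

// Swap in THREE.CanvasRenderer() here for browsers without WebGL
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var cube = new THREE.Mesh(
  new THREE.CubeGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial() // colours each face by its normal - no lights needed
);
scene.add(cube);

(function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.02;
  renderer.render(scene, camera);
})();
```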

If you’d like to see the recording of the talk, here are the links. Please note that only one corner of the slides is visible in the video, so you may wish to click through the slides yourself at the same time to follow along:

Part 1: http://vimeo.com/58140968

Part 2: http://vimeo.com/58263170

Augmented Reality for Web Developers

Whatever happened to Augmented Reality? There’s been a lot of hype, but has it resulted in anything useful yet? According to the Gartner Hype Cycle, expectations for AR have already peaked and it’s about to move into the Trough of Disillusionment.

However, I’m still excited about AR. In fact, I’m more excited about it now than ever before. Why?

  • We’re getting close to consumer-ready wearable AR devices, such as Google’s Project Glass. In a recent interview on the Gavin Newsom Show, Sergey Brin said he’s hopeful they could actually come to market next year. I think that these kinds of devices could open up a whole new age for AR.
  • I believe in the power of the Web. Up until now, AR has only been possible through native apps (or Flash). But now it’s opening up to Web developers and it’s possible to build AR apps using HTML, CSS and JavaScript.

A couple of weeks ago I gave a talk on this subject at the London Web meetup group. It covers Wikitude’s ARchitect platform (a bit like PhoneGap for AR) and WebRTC (an emerging standard for working with real-time communications through the Web).
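The camera access at the heart of the WebRTC side comes down to just a few lines. A sketch, using the vendor-prefixed API as it stood at the time (assumes a `<video>` element on the page):

```javascript
// Ask for the camera and pipe the live stream into a <video> element
var getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
var video = document.querySelector('video');

getUserMedia.call(navigator, { video: true },
  function (stream) {
    video.src = window.URL.createObjectURL(stream); // live camera feed
    video.play();
    // The AR layer (e.g. the dinosaur) is then drawn on top of the video
  },
  function (err) {
    console.error('Camera access denied or unavailable', err);
  }
);
```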

A couple of people have asked if I’d share my slides. They’re web slides, based on the HTML5 Rocks slide deck. But I changed the design, added media queries and embedded a live WebRTC demo inside. If you haven’t seen an augmented reality dinosaur in your browser before, I encourage you to try it out!

Some quick points to note first:

  • Best in Chrome or Safari. To see the AR dinosaur demo, you’ll need Chrome Dev or Chrome Canary (see: http://www.webrtc.org/running-the-demos)
  • Best viewed in a 4:3 ratio (the ideal is full-screen at 1024x768 - the resolution of the projector)
  • It contains lots of big images so make sure you’re on a broadband connection
  • Press the keyboard left + right arrow keys to navigate (or swipe left/right on an iPhone or iPad)

Here’s the link:

http://augmented-reality-for-web-devs.herokuapp.com

And I’ve shared the code on GitHub: https://github.com/poshaughnessy/augmented-reality-for-web-devs

There’s not too much text in the slides so if you would like a bit more context, the talk was also video recorded and the video is up - thanks to Nathan - at:

http://vimeo.com/43317655

I welcome any comments!

Web First, Hybrid Second, Native Third

If you want to create a mobile app, one of the big questions you need to answer early on is: Web, hybrid, or native?

There’s no one-size-fits-all answer; there’s a lot to consider. But how do you go about making the decision? Which option should you consider first? I propose Web First. Only if the Web alone won’t do, consider Hybrid Second. Finally, think about Native Third.

Web First

The best reasons to go with the Web can be summed up as: portability, shareability and updatability (I may have invented one or more of those words, but you get the idea!).

1) Portability

Not all applications need to work on multiple platforms. If you’re setting out to write an iPhone game, it’s a valid choice to just target iPhones. But most application developers need to consider multiple platforms. How do we cater for people with Android, Blackberry or Windows Phones? This is a problem that’s growing fast. In 2009, Android’s market share was 4%. Now it’s over 50%. And Nokia are expected to sell 37 million Windows Phones in 2012.

Avoid locking yourself into one vendor by using open, standardised technologies that can work for many. Avoid writing two or three separate applications with different codebases and having to maintain them all separately. Write one app to rule them all!

The counter-argument to this is that it’s not easy to get Web apps working perfectly across different types of devices. For anything beyond the simplest application, you’re going to find things you need to tweak for each device you test. You’re likely to need various tools and techniques such as Modernizr and media queries. You may even need to factor in devices that don’t support the very tools that should be easing the process. It’s likely to be painful for the foreseeable future. But in many cases, it should be worth it.

As well as different vendors, there’s also the question of different device sizes. We already have mobiles, tablets, ultrabooks, laptops, desktops, Internet televisions… and there’s no telling where the variety will end. The Web is the best way to reach all these different types. Once again, it won’t be easy. You’ll need different sets of styles and lots of tweaking. You’ll probably end up reading a lot about Responsive Design (and perhaps like me you’ll think we’re not quite there yet when you load the much-hailed Boston Globe on your mobile phone and find one very, very long column). But… it’s possible to cater for this variety and you can’t say that for anything else but the Web.

2) Shareability

URLs are underrated; they’re the Web’s killer feature. They make it easy to share your application, or even a specific page or part of it.

Native apps are really missing out. I saw an advert on the tube today that advised those looking for their app to go to the App Store and search for “Parker Car Service Smarter Minicab Booker”. This is not a good way to point people to your app. Okay they could’ve used a URL (iOS devices load itunes.apple.com URLs in the App Store), but the reason they didn’t is that it would be pretty confusing for customers.

Say we bothered to tap in that big long search query and we’ve now found the App Store listing for our app. Now we have to download it. This isn’t a big hassle if we’re going to use the app a lot. But nearly 30% of apps downloaded are used just once. Compared to just clicking a link, that’s a lot of effort to go through if you’re going to use it once and throw it away.

As well as URLs, it’s the ubiquity of the Web that makes it so shareable. All smartphones have a Web browser. Not all smartphones have a particular native app installed. This is particularly important for sharing on social networks. I can easily point my friends to a Web link and they can load it up and consume the content within their Twitter or Facebook applications. Twitter and Facebook can embed a Web viewer within their apps because the Web is ubiquitous and non-proprietary.

3) Updatability

Releasing updates to native apps is a pain. You have to go back through the app store release process. For iOS, that involves re-submitting to Apple and waiting a couple of weeks for them to approve it (or they might reject it).

With the Web, you can just push out the new version at your convenience. You can be more responsive to feedback, fix bugs quicker and generally keep your content up to date much better.

Native apps are a pain to update from the consumer’s perspective too. On my iPhone, the App Store always has a big red number next to it, glaring at me for not updating my apps more often. Scott Hanselman called it “feeding the update beast”. For big apps, you may need to wait until you’re on wifi. Native apps do allow you to potentially stick with a specific version, whereas with a Web app you’re forced to update. But effectively you’re forced to update native apps too, because if you don’t, you’ll just have a notification glaring at you for eternity.

Hybrid Second

The Web alone doesn’t work for you? Okay let’s move onto Option 2: “Hybrid”. This is a kind of mish-mash of Web and native; basically, wrapping a Web app within a native app. Tools like PhoneGap are very popular and make this pretty easy. Some reasons for doing this are: extra features, payments and discoverability.

1) Extra features

The biggest reason to put your web app within a native wrapper is to add native features that you can’t implement with the Web. For example, integrating with the camera or the contacts book. 

I won’t try to list all the things you can and can’t yet do with the Web, but it’s worth saying for the benefit of us future-gazers that the Web should catch up with a lot of these features. The Device APIs Working Group is working towards this, but unfortunately it’s been rather slow-moving. Mozilla are hoping to fast-track some of it through their WebAPI project, from which we should see something quite soon.

Of course, the Web will never be able to do everything that all devices can do natively. Proprietary features can be made available quicker. Shared standards evolve slower. So there will always be reasons to develop particular features with native code. I predict, though, that more native coding will become simply add-ons to Web codebases; fewer apps will be written wholly in native code.

2) Payments

One advantage that native/hybrid apps have, at least on Apple devices, is that people are quite happy to pay for them. Apple have made it as easy as it can be, with a one-click-plus-password method. It’s just the same process whether you’re downloading a free app or one that costs money, so there are no extra steps to put you off.

I don’t think we’ve really seen this level of ease come to the Web yet. However, it’s not a complete win for the hybrid/native approach. It’s worth remembering that you’ll pay Apple a lot for the privilege. If you are able to roll your own subscription or payment method, you could save a lot by not having to pay Apple a 30% cut.

3) Discoverability

App stores do provide a great way to discover apps, but I think this argument can be a bit over-stated. Let’s not forget the Web’s powerful feature, the URL. An address like app.ft.com is easy to remember and share. The FT replaced their native app with a Web app, outside of the App Store. By breaking the million user mark, they’ve proved that this model can be successful.

Native Third

You’re still here? Can’t do what you want with Web or Hybrid? Let’s explore the third option then: writing the app purely in native code. I’ve bundled some reasons to go down this route into: Performance and Other Considerations.

1) Performance

Performance is another argument for native apps that I think can be a bit over-stated. As Cut The Rope’s ZeptoLab state: “JavaScript now can execute at near native speeds”. We’ve seen other successful native games such as Angry Birds brought successfully to the Web browser too. Of course, desktop browsers can perform better than their mobile counterparts, but it surely won’t be long before mobile browsers are as fast as desktop browsers are now. For the most performance-hungry apps though, for example fast, 3D games, then native is probably the best choice. (This could change in the future when a significant number of mobile devices support WebGL).

2) Other Considerations

Other reasons for choosing pure native? If you need a lot of native features or if they’re the core parts of the app, then maybe it would be messy, or simply unnecessary, to write some bits using Web technologies.

Offline capability is perhaps the most misleadingly quoted reason to choose native. People instinctively think of the Web as being connected all the time, but it is possible to store Web apps (both the data and the apps themselves) offline, using HTML5. However, it is true that native can provide more storage and greater capabilities in this area. Also, it’s still early days, so working with these HTML5 features can be a bit tricky at the moment.
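For example, HTML5’s application cache lets you declare the files that make up your app, so the browser keeps them available offline. A sketch of a manifest (file names illustrative):

```
CACHE MANIFEST
# v1 - change this comment to make browsers re-fetch the files below

CACHE:
index.html
app.js
styles.css

NETWORK:
*
```

The page opts in with `<html manifest="app.appcache">`; the data side is then covered by localStorage or IndexedDB.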

Finally, not all developers are Web developers and if you’re just more comfortable with Objective-C than JavaScript, plus you’re happy to miss out on all the advantages listed above, then you’re okay to stick with what you know!

Summary

I believe that the Web should be the default choice for applications. It’s the most portable, flexible and accessible option. It’s not easy, but it will get easier. If you can’t achieve what you want with a straight web app, the next choice is hybrid. Finally, there’s the option to go purely native.

To convey this visually, I’ve created a flow chart. It’s a very simplified view and shouldn’t be taken too seriously, but I hope you like it:

https://docs.google.com/drawings/d/1edfygfJwahmZSLC3WIdHyqfd6QsspHj8UQHaXcd9MPI/view

Have I missed any key points? Been unfair about anything? If so, please comment and let’s discuss!