The Future of the Web and What It Means for Education

[Cross-posting of my blog post at labs.pearson.com]

What will the future have in store for the Web? Hundreds came together in London a couple of weeks ago in the hope of finding out, at the aptly titled Future of Web Apps conference. The talks were aimed at web developers, but the topics will end up affecting everyone. So what were the major themes? And what does it mean for education?

image

Credit: Future Insights

Theme 1: More Powers for the Web

Those who create mobile applications will be familiar with the long-standing “Native versus Web” debate. Native apps are those that are developed for individual platforms, such as Apple’s iOS or Google’s Android, and are generally downloaded via app stores. Web apps can be accessed with a web browser just by typing a URL, selecting a link, or tapping its icon if you’ve bookmarked it.

Web apps are great because they have all the benefits of the World Wide Web: being open and non-proprietary, universally linkable and able to work on all kinds of devices. But users tend to prefer native apps. The opening speaker, Bruce Lawson from Opera, referenced a number of examples and data that reveal users spend significantly more time inside native apps than web apps. Why is that?

Offline first

The biggest reason is probably because web apps don’t currently work well offline. The FT’s web app is a great exception to this general rule, but it’s rare to see offline web apps because it’s currently very tricky to do and there are lots of limitations.

Those who make our web browsers are working hard to change that. By the end of this year, developers will be able to start using a new feature called Service Workers, which will give web apps lots of new powers to work offline (and more). Websites will be able to work offline by default and - like native apps - update if and when there is a connection. It should also allow websites to fetch updates in the background and notify you when something important happens (with your permission).
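To give a flavour of what this looks like for developers, here’s a minimal sketch (the API is still settling, so the details may well change): a page registers a Service Worker, which caches a few files at install time and serves them from the cache when the network isn’t available. The file names are just examples.

// In the page: register the Service Worker (hypothetical file name).
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js');
}

// In service-worker.js: cache some files at install time...
self.addEventListener('install', function(event) {
  event.waitUntil(
    caches.open('offline-v1').then(function(cache) {
      return cache.addAll(['/', '/styles.css', '/app.js']);
    })
  );
});

// ...and fall back to the cache whenever a network fetch fails.
self.addEventListener('fetch', function(event) {
  event.respondWith(
    fetch(event.request).catch(function() {
      return caches.match(event.request);
    })
  );
});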

Imagine students and teachers being able to continue interacting with their school’s Learning Management System on their mobile phone, when they no longer have a signal (for example, on an underground train). These capabilities should be good for all of us, but it’s especially good news for parts of the world where Internet connectivity is sparse. As Bruce Lawson reminded us, “most of the world doesn’t have connectivity”.

Performance

Another advantage you may have noticed for native apps is that they often feel smoother as you scroll up and down or swipe around; they tend to have snappier animations and transitions. Again, this is an area that the web is trying hard to catch up on. Bruce informed us that for Blink - the engine behind the Opera and Google Chrome browsers - all of the priority for 2014 has been performance on mobile. Opera are especially focusing on less-powered phones, because “not everyone has the latest shiny iPhone”. This work should help us to deliver better mobile web experiences to learners around the world.

Graphics and gaming

The web is also becoming a platform for high-end gaming and immersive, interactive experiences. Using a technology called WebGL, web apps can now provide near console-quality graphics. Up until now WebGL has largely been the preserve of desktop demos, but support has just arrived on iPhones and iPads with iOS 8, so we should see it start to take off more on mobile now too.

If you would like to get a sense of what is possible, there is a large set of WebGL demos here: http://www.chromeexperiments.com/tag/webgl/. (Technically the site is for Chrome demos, but most of them should work in various other browsers too. That’s the beauty of open standards!).

If your mobile phone supports WebGL (you can test this by visiting get.webgl.org), then look out for the mobile symbols indicating those that should be mobile-friendly. A demo that I particularly like is Racer S.

image

Credit: Samsung Racer S

Audio

We can also expect the web to make better use of audio in the future; the topic of Web Audio - a powerful system for generating and controlling audio - came up in multiple talks.

If you’d like to test it out, here’s a demo that you can try in Chrome on the desktop: http://www.jamwithchrome.com/
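If you’d rather poke at the API directly, here’s a tiny sketch that plays a short beep with Web Audio (some browsers still need the webkit prefix):

// Create an audio context (prefixed in some browsers) and play a short beep.
var AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

var oscillator = context.createOscillator();
var gain = context.createGain();

oscillator.frequency.value = 440; // A4
oscillator.connect(gain);
gain.connect(context.destination);

oscillator.start(0);
oscillator.stop(context.currentTime + 0.5); // stop after half a second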

“Web+”

Overall, these features should provide the Web with more of the capabilities that users have come to expect from native apps. Bruce termed this “Web+”. The best of the Web, plus the best of Native? It should be a powerful combination…



Theme 2: APIs everywhere

The future of the Web should also see a continuation of the rise of APIs. APIs are services that developers can use to help them build their software. They might provide data, content, or functionalities that save time and effort. For organisations that provide them, APIs can make their services available to new audiences, and allow innovative new uses that they might never have imagined.

With APIs becoming ever more ubiquitous, Ian Plosker said in his talk that “Web 3.0 is about APIs. If software is eating the world, APIs are eating software”.

In the world of education, this should lead to learning services that are better integrated, more connected, and more open to innovation.
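As a trivial sketch of what this looks like from the developer’s side - with a made-up endpoint URL, purely for illustration - pulling data from a JSON API can be as simple as:

// Hypothetical endpoint, for illustration only.
var request = new XMLHttpRequest();
request.open('GET', 'https://api.example.com/courses?subject=maths');

request.onload = function() {
  if (request.status === 200) {
    var courses = JSON.parse(request.responseText);
    console.log('Received ' + courses.length + ' courses');
  }
};

request.send();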



Theme 3: The Extensible Web

Another common theme was that we’re all responsible for defining the future of the Web. It won’t simply be dictated to us by those who make web browsers. The Web is based on open standards that we can all contribute to.

However, in the past, browsers have sometimes provided features that were too “magical”; they didn’t allow web developers enough control and they ended up being limiting and frustrating. (The best example of this is the previous attempt at solving the offline problem for the Web: AppCache).

We heard that there’s a new approach called the Extensible Web. Browsers will concentrate on providing comprehensive foundations, which developers can build things on top of. Once developers start settling on approaches that make the best use of those foundations, some of these additions may also be standardised and built into the browsers themselves.

In practice, this should help the capabilities of the Web to advance more quickly and more smoothly. This should be great for developers and pay dividends for users of the Web too.



The future of the Web? It’s bright

image

Credit: CLUC, Flickr

In the near future, watch out for websites that retain all the great benefits of the World Wide Web, but also have more of the things we love about native apps; for example, offline capabilities and console-quality graphics. We can expect better quality, more mobile-friendly education websites, and learning services that are more connected and more flexible through the use of APIs.

And since the Web is open and extensible, we can all have our own say in its future too.

Interfaces of the future, and how to hack around with them now

On 31st July I gave the following talk at Front End London. The slides are here.

I’d like to talk today about some of the new kinds of interfaces that are on the horizon and may be taking off in the next few years. And for a couple of the devices that I’m finding most exciting at the moment, I’ll introduce you to hacking around with them right now, using Web technologies.

Luckily I get to explore this kind of stuff in my day job, as a Developer in the Future Technologies team in Pearson, the world’s leading learning company. (Some of you here may know it as the parent company of the Financial Times).

First, let’s think about where we are now, in the world of the Web.

image

Credit: Jeremy Keith

We’ve moved on from the Desktop Era, and having travelled through the Age of Mobile, we’re now in the Multi-Device Era, where we no longer have neat categories of devices (“smartphone”, “tablet”, “desktop”), but a continuum of different screen sizes and an assortment of touch-screens and non-touch screens.

But in a way, all these devices are still kind of the same. They’re all flat, 2D screens that we have in our hands, or just in front of us. As Bret Victor memorably said, it’s all just Pictures Under Glass.

image

Credit: PlaceIt

That surely can’t be it… So what’s next?

Well I’m sure we’ve all been hearing a lot of hype about wearables recently. Interest in smartwatches has surged since 2012 when Pebble (on the left here) became the first $10m Kickstarter campaign.

image

Credit: Kārlis Dambrāns

Now Google, Samsung, LG, Motorola and many others are getting in on the act, and of course we’re all waiting to see what Apple may or may not reveal later this year.

And who could forget the poster child of geeky new tech, Google Glass?

image

Credit: Thomas Hawk

If you haven’t already tried it out, I’m sure you’ve all heard lots of opinions about it already.

You might be, quite rightly, feeling a bit skeptical about these wearable devices taking off. Are people really going to want to use them? Are they actually useful? How many people would actually be happy to wear things like this out in public? And you might be thinking that they aren’t really that different to what we have now. Even if they do become popular, aren’t they merely additions to the Multi-device Era we’re already in?

I think those are all very reasonable thoughts, when looking at the wearables space right now.

But I’d like to offer up a couple of reasons why I think that certain kinds of wearable devices could become a very big deal in the near-future…

Different devices, different experiences

The first thing I’d like to say is that, as with smartphones and tablets, the differences between these devices aren’t merely in the sizes of the screens; they’re in how we use them and the kinds of experiences they lend themselves to. For example, when the iPad was announced, a lot of people dismissed it as just being a “big phone”: too big to take out with us all the time, and why would we need one at home when we already have a laptop there? But it turned out that it makes for a great “lean back” device, something that we’re more comfortable using on the sofa.

image

Credit: plantronicsgermany

So I think we should think carefully about the kinds of experiences that new devices might lend themselves to as well. 

Especially because some of the ways that we end up interacting with new technology are often difficult to predict. Before the smartphone explosion, who would have predicted that so many of us would use them for this:

image

The “Selfie”. Credit: Wikimedia

The Long Nose of Innovation

And I’d like to talk about being patient… Disruptive technology doesn’t take off as soon as it’s been invented. New types of devices start off in research labs as big, clunky, expensive things. Then they go through years of refinement and augmentation until eventually everything comes together: affordability, ease of use, good marketing… and finally they can take off and gain traction.

image

Credit: Sketchplanations

We’ve seen this many times over the years. For example, multi-touch interfaces have been around in some form for decades, but didn’t really take off until the iPhone. And it’s the same story with tablet computers and the iPad.

And we all kind of know this, yet we still seem to go through this hype curve every time:

image

Credit: Wikipedia

I’m guilty of this too. We hear about a cool new technology, and we get really excited about it, and then we try it, and it lets us down. It doesn’t meet our expectations, and our instant reaction is that the whole thing is a load of rubbish. But gradually, as the tech becomes more refined, we start to understand more about what it’s good for and what it’s bad for, and eventually it just becomes another part of everyday life.

So bearing all this in mind, let’s take a look at a couple of upcoming paradigms that I think could be genuinely disruptive in the next few years…

Augmented Reality and Holographics

image

Credit: Pearson School of Thought

Firstly, holographic-style augmented reality interfaces, where you can reach out, create and interact with virtual content in 3D, Tony Stark-style. Basically, future AR displays combined with future Leap Motion-style sensors that let you manipulate things with your hands in natural ways. This could bring the real world and the digital world a lot closer together. Imagine using this to collaborate with people to design and create things - each person able to see what the others are painting in the air…

It might seem like this is still a long way off… But it shouldn’t be long before we can start to try it out at least. Meta are planning to bring “the first holographic interface” to market next year.

image

Credit: Meta

This Pro version will be $3,650 and it’ll be attached to a pocket computer. So there are a couple of reasons already why it’s unlikely to shoot up that traction axis straight away. But some smart people predict that it will only be 5 years before it becomes this:

image

Credit: Wikimedia

Just regular looking glasses or shades. That could really help to open it up to the mass market.

Virtual Reality

And how about Virtual Reality?

image

Credit: Sudhee

Again, people have been talking about it for decades. But the first affordable consumer Virtual Reality devices, like the Oculus Rift, should go on sale as soon as next year:

image

Credit: Oculus Rift

The unique thing about VR is the feeling of “presence”; you’re transported into another environment. Go to the edge of a cliff in virtual reality and you should find that you get sweaty palms and a quickening heart beat, like you would in real life.

image

Credit: Fotolia

Of course, your conscious brain knows that you’re just wearing what is effectively a pair of clunky ski goggles. But enough of your subconscious brain is tricked that it can feel like you’re actually immersed in another world…

Here’s just one interesting example: hooking into a live camera feed on another person, to enable a person in a wheelchair to see herself dancing on her feet:

image

Credit: BeAnotherLab

We’re only really just scratching the surface of it right now, but we have the potential to create some amazing experiences for people, which can lead to reactions like this:

image

Credit: Paul Rivot

WebVR

This is why I think Virtual Reality is exciting, and it’s an especially exciting time for us Web developers right now. Because just in the last few weeks:

  • Apple finally embraced WebGL, a key technology for creating 3D experiences in the browser
  • And Mozilla and Google have both released special builds of their browsers, with initial support for Virtual Reality

This is what they’re implementing:

  • The ability to discover available Virtual Reality devices (in practice just the Oculus Rift right now, but more will be coming…)
  • Full screen extensions so you can request an element goes full screen on the VR headset
  • Sensor integration so you can use, for example, the orientation of the device
  • And the particular distortion effect required for rendering on different VR devices - so you should be able to stay hardware agnostic

Google are calling this “WebVR” (Mozilla don’t seem to be naming it anything in particular yet). It’s at “version zero” and it’s not even in the alpha channels of the browsers yet; currently you can only get this in separate builds.

Here’s how you use it… With a WebGL scene, you render it twice, side by side: one for your left eye and one for your right eye.

image

Credit: Oculus Rift

The browser can apply the distortion required for the particular device - it’s like this for the Oculus Rift:

image

Credit: Oculus Rift

The lenses then turn this into something that covers as much of your vision as possible.

As for CSS3D content, it should be even easier because it’s declarative, so you can leave it up to the browser to figure out how to render it. You should just need to use ‘preserve-3d’ and set the ‘perspective’, then request that your containing element goes full-screen on the VR device. That’s the theory anyway: Mozilla are working on this now, but I haven’t seen any demos of it yet [Update: as of 31st July, Mozilla have released new builds with preliminary CSS integration].
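A rough sketch of that theory in JavaScript (untested against those new builds, and the element name is made up):

// Rough sketch only: set up a CSS3D scene and send it to the VR display.
var container = document.getElementById('scene');  // hypothetical element
container.style.transformStyle = 'preserve-3d';
container.style.perspective = '800px';              // tune to taste

// Then ask for the container to go full screen on the headset,
// just like the WebGL case shown below.
if (container.mozRequestFullScreen) {
  container.mozRequestFullScreen({ vrDisplay: hmdDevice });
}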

We’ll stick with WebGL and I’ll show you just the key pieces of code we need to add. Warning: these APIs are brand new - they will undoubtedly change.

if( navigator.getVRDevices ) {
  navigator.getVRDevices().then( vrDeviceCallback );
}

This is the discovery bit. [Update: Chrome and Firefox now both use promises].

function vrDeviceCallback( vrDevices ) {
    for( var i = 0; i < vrDevices.length; i++ ) {
        // If it's a head-mounted display, use it for rendering
        if( vrDevices[i] instanceof HMDVRDevice && !vrHMD ) {
            vrHMD = vrDevices[i];
        // If it's a position sensor, use it for orientation data
        } else if( vrDevices[i] instanceof PositionSensorVRDevice && !vrSensor ) {
            vrSensor = vrDevices[i];
        }
    }
}

In our callback we can check it’s a Head-Mounted Display and also see if we can get sensor data out for the orientation.

var leftFOV =
    vrHMD.getRecommendedEyeFieldOfView('left');

var leftTrans = vrHMD.getEyeTranslation('left');   

For each eye, we can ask for the recommended field of view which we can use to set the right camera projection, and also the translation to apply, as in how far apart the cameras should be.
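As a rough illustration of how those values might feed into a Three.js scene (the variable names are mine, and the projection handling is simplified - real demos build the projection matrix from the full four-sided field of view):

// Simplified sketch: use the recommended values to configure a left-eye camera.
var cameraLeft = new THREE.PerspectiveCamera();
cameraLeft.fov = leftFOV.upDegrees + leftFOV.downDegrees; // crude approximation
cameraLeft.updateProjectionMatrix();

// Offset the eye camera by the eye translation, so that when you do the same
// for the right eye the two cameras end up roughly an eye-width apart.
cameraLeft.position.set( leftTrans.x, leftTrans.y, leftTrans.z );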

if( canvas.webkitRequestFullscreen ) {
    canvas.webkitRequestFullscreen({
        vrDisplay: hmdDevice });

} else if( container.mozRequestFullScreen ) {
    container.mozRequestFullScreen({
        vrDisplay: hmdDevice });
}

And we call requestFullscreen, passing in our VR device. (Note that for Chrome, “Fullscreen” has a small ‘s’ and you need to call it on the actual WebGL canvas element. For Firefox, it’s a big ‘S’ and their example calls it on the element containing the canvas).

Now you just need to add your usual WebGL goodness. I used the popular Three.js library. I also like dinosaurs, so I added a dinosaur thanks to Dorling Kindersley. Plus a sky map from eyeon Software. And I made this…

I’m hoping to release the code, but I haven’t been able to yet. Brandon Jones from Google has the code for his demo up here though, plus be sure to check out his blog post.

Google Cardboard

Also just in the last few weeks, Google unveiled Cardboard which, as it sounds, is literally made out of cardboard. With just a couple of lenses and a button made from a magnet, it can turn your existing smartphone into a rudimentary Virtual Reality device for just a few dollars.

image

We can also create Cardboard apps using Web technologies, right now. It’s not supported by these very new WebVR implementations just yet, but because it’s essentially just a phone, we don’t actually need WebVR to be able to get it to work.

In fact, Three.js has a StereoEffect we can apply, which makes it easy to render the same scene for both eyes side by side:

var effect = new THREE.StereoEffect( renderer );
...
effect.render( scene, camera );

And Three.js also has a controls module that uses the standard HTML5 orientation API in order to render things according to the orientation of the phone:

var controls = new THREE.DeviceOrientationControls(
        camera, true);

controls.connect();
...
controls.update();
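Putting the two together, the render loop ends up looking something like this (a simplified sketch, assuming you already have a Three.js scene, camera and renderer set up):

// Simplified render loop: update the orientation controls, then let the
// stereo effect draw the scene once for each eye.
function animate() {
  requestAnimationFrame( animate );
  controls.update();              // apply the latest device orientation
  effect.render( scene, camera ); // draw the left and right eye views
}

animate();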

Here’s what that looks like:

Again, unfortunately I can’t share the code for this right now, but Google have a code example up here.

Just a taster

image

Hopefully I’ve given just a taster about some exciting new technology coming up, and how you can get started with Virtual Reality right now. 

I’ll leave you with this thought…

Today, we’re creating pictures under glass. 

Tomorrow we’ll create whole new worlds.

image

Credit: mind-criminal

So let’s get ahead of the curve and start hacking now!

Thank you.

"The Full Spectrum Developer"

This week I was invited to attend a talk by Michael Feathers at the offices of News UK. The topic was “The Full Spectrum Developer”.

Taking inspiration from Laurence Gellert’s post "What is a Full Stack Developer?”, Michael talked about how we should broaden our horizons. Don’t just master one small domain; try to understand a bit about every aspect of your industry, from business needs, to hosting, to user experience. That way we can contribute intelligently and reduce the communication cost between separate teams.

It was a thought-provoking talk, so I wanted to share my notes:

  • There’s a wide span in skills across the industry. What motivates people to go beyond?
  • Curiosity? What does it mean? How do you nurture it?
  • “The Full Stack Developer” understands the whole stack, as well as the business
  • Silos don’t work well due to the communication cost
  • The socio-dynamics of different teams can make things go crazy - “the designers don’t understand about development” - “the developers don’t understand about design”… We have to move away from that
  • Don’t get yourself caught in a silo
  • The Full Spectrum Developer knows about:
    • Server, network, and hosting environment
    • Data modelling
    • Business logic
    • API layer / Action Layer / MVC
    • User Interface
    • User Experience
    • Customer and business needs
  • Know it all the way up and down the chain
  • David A. Thomas is a great example of a full stack developer - knows everything from microprocessor coding to employee retention
  • Bjarne Stroustrup built C++ for himself and became the father of a language
  • Robert Fripp continuously reinvented himself
  • Be distracted sometimes
  • “I can see this barrier. Can I go around it? No, I’ll break through it”
  • "Be the stupidest person in the room" - if you’re going to transition to new things, get used to this
  • Start asking questions early on in the conversation
  • Never underestimate what you know - we all have things to contribute
  • “The people who do well are people who read” - reading is fundamental
  • Cells were the inspiration for Object Oriented Programming
  • In Smalltalk, everything is an object, all the way down
  • Alan Kay, creator of Smalltalk, is great with metaphors - take a metaphor from one part of the world and apply it to another
  • One that didn’t really take off is Lucid, “the dataflow programming language” - uses the metaphor of fluid dynamics - powerful
  • APL is a programming language that uses non-ASCII characters. Funky!
  • J is derived from APL - Quicksort in one line!
  • Challenge yourself - cultivate 3 conceptual interests outside of work
  • E.g. Functional programming - pick something outside of your expertise
  • Push the edges - try it out
  • Different languages - build a repertoire
  • Recommended books:
  • Learn what you need to do your job well, but take distractions - be curious
  • Time-box it - don’t think of it as a chore

Awesome Mobile Animations

Earlier this week I gave a talk at the EdTech Developers Meetup on "Awesome Mobile Animations".

It’s about the kinds of fluid animations that native apps are increasingly using, and the fact that we Web developers should try to up our game and make our animations better too. It gives examples that use CSS3D, Canvas and WebGL. Then it runs through some performance tips.

If that sounds interesting then you might like to check out the…

Blog post: www.geeking.co/awesome-mobile-animations/

Slides: awesome-mobile-animations.herokuapp.com

Source code: github.com/poshaughnessy/edtechdevs-awesome-mobile-animations

Turning hacks into products: Lessons from Let’s Code!

[This is a cross-post of the discussion topic on geeking.co]

Phase 0: The idea

You’ve probably been hearing quite a lot lately about children learning to code. Interest in the topic has exploded recently, with initiatives such as the Hour of Code and new applications and kits designed to help teach programming coming out all the time. This is a post about the process we went through to create our own code-learning web app, Let’s Code! Along the way, we learned that it’s great to hack-start a project, but turning prototypes into products isn’t easy…

The story begins in April 2012. At this time there was an increasing amount of discussion here in the UK about the ICT (Information and Communications Technology) curriculum needing to be modernised. Consensus was growing that we were failing kids by teaching them only how to use software (like Microsoft Office) and not how to create it. The existing curriculum lacked the potential for creativity, it bored students and it put them off studying Computer Science in higher education. The result was an increasingly concerning skills shortage.

This troublesome situation wasn’t lost on our colleagues, who suggested that we - Pearson’s Future Technologies team - might try to do something to help. Since we’re a central R&D team inside the “world’s leading learning company”, it’s our job to prototype new concepts and explore new technology that may affect education. At one of our bi-annual “Future Technologies Champions” meetups, where we come together with our colleagues to generate ideas for, and decide on, our next projects, this idea was voted top. So it was decided: we would create an application to address the IT skills shortage and to help make ICT fun again.

 

Phase 1: Hack-starting (pre-alpha)

We decided to “hack-start” the project, using a hackathon-style format internally to kick things off quickly. We came together with some of our expert colleagues, including an ICT subject advisor, and hacked away together in a room for two days. Over the course of that time, we designed the basics of the app and created the first, quick prototype.

Some of the decisions we made in this short period of time were:

  • To make it as visual as possible and to foster creativity by allowing young people to create their own applications

  • To base it around objects that have properties and can move, as part of real life scenarios that young people can understand and relate to. For example, the long jump in athletics (the London Games were just coming up back then!)


We spent most of the first day figuring out what we were going to do, but by the end of the second day we had hacked away with Easel.js and created a working long jump demo.

image

Although it was basically useless as an actual application at this point, it was really useful as a starting point, to convey what we were hoping to create (both to ourselves, and anyone we spoke to about it).

 

Phase 2: Prototyping (alpha)

Following the hack, we archived that code and started developing again from scratch. (There’s not much harm in throwing away just over a day’s worth of messy code!).

Over the course of the next few weeks, with help from Phil Powell who joined us for a few weeks as a contractor, we built an alpha using Backbone.js. It featured most of the core features for the app. It allowed objects to be added to the stage. You could edit their properties and see the effects. You could make events trigger things (e.g. hooking up a button click to make the athlete start running). It included key programming concepts: objects, properties, methods and events. We also had a couple of tutorials to guide the user through getting started, although you could go off-script and do your own thing too.

We got the site deployed for anyone to try out and made the code public on GitHub. We felt that we had conveyed the concept and we started to get some people excited about it.

It felt like our hardest work had been done.

The main problem was that there was no server side component at all. You couldn’t save your project and if you refreshed the page, you’d go back to the beginning! Also, we hadn’t spent much time on cross-browser testing and it didn’t work in Internet Explorer.

image

image

 

Phase 3: Productizing (beta)

We were all really keen to see our prototype live on and we wanted to learn as much as we could about developing an open source project, to see what lessons we could pass on to other business units in Pearson.

So we decided to spend more time to “productize” it, i.e. add the rest of the features it needed to be a minimum viable product. We’d need to add a server (we chose Node.js) and a database (we chose MongoDB) so we could store users and projects.

We’re used to creating prototypes (23 so far, and counting!) but this was Future Tech’s first experience trying to actually extend one, beyond our usual 8-10 week timeframe.

Naively, I wasn’t expecting it to be too difficult. I thought that we could simply build out the alpha and add more features. I did anticipate that we would need to do some refactoring as we went along, but I didn’t expect it to be that much work.

Starting again

Because we’re a very small team (just two developers) and always busy working on multiple new prototypes at once, we had Edward Ruchevits - who had just joined us as a developer intern on a year out from university - take on most of the work on this productizing effort.

Edward is a super-smart and knowledgeable developer, but we really threw him in the deep end on this one! I think that, after being employed as a developer for a bit over a decade now, I had forgotten how difficult it is when you first start to work with other people’s code. Edward was also coding with most of the libraries and technologies (e.g. Backbone.js, Node.js, MongoDB) for the first time. Furthermore, as I was basically full-time on my next project, I wasn’t able to spend enough time with him. So, naturally, Edward didn’t get on well with our fairly messy alpha code. He decided to create a new version from scratch, confident that it would actually be quicker that way, because he would find it easier to work out how things were pieced together.

We were all concerned about starting over again, but we thought that we would be able to pull in code from the old version as we went along. Unfortunately, the codebases quickly diverged, and soon we weren’t able to pull in much of the old code at all.

All this meant that a lot of the effort for the beta went into rewriting features that we already had in the alpha. That was obviously quite frustrating for all of us.

Edward introduced some great improvements though. He adopted Marionette which helped to structure our Backbone code better. He switched to using HTML5 canvas for the stage, instead of DOM elements. And he realised that we didn’t actually need some of the nastiest code from the alpha, and was able to remove it.

But gradually we realised that we’d really underestimated the effort required…

Ramping up

Soon we knew that we’d need some extra help. We asked a London agency called Adaptive Lab to work on it with us for a few weeks.

Adaptive Lab helped to bring a more rigorous approach to development. They were conscientious about writing tests, they conducted code reviews and they were great at mentoring Edward.

A little later we were also joined by ThinkGareth, who dropped right in with great expertise in the technologies we were using, and he helped us all a great deal.

Debugging and wrapping up

Coming to the end of the time we had available, we started concentrating less on adding/restoring features and more on debugging.

At this time, we noticed one particular kind of bug kept rearing its head…

We’re using an Event Aggregator (Backbone.wreqr) to de-couple the various components of the app. Instead of calling other modules directly, you can fire an event which other modules can listen out for and respond to appropriately.

This is great, except we kept running into bugs caused by accidentally leaving old event listeners lying around. It’s easy to do: Marionette Views automatically unbind events you’ve hooked up with listenTo (as opposed to on) when you close the view, but that doesn’t happen for Event Aggregator events; you have to remember to turn those listeners off in the views’ ‘close’ methods.
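A stripped-down sketch of the pattern (the names are illustrative, not our actual code): the view subscribes in initialize and has to remember to unsubscribe in onClose.

// Illustrative only - App.vent is assumed to be a Backbone.Wreqr.EventAggregator.
var StageView = Backbone.Marionette.ItemView.extend({

  initialize: function() {
    this.onObjectAdded = this.onObjectAdded.bind(this);
    App.vent.on('object:added', this.onObjectAdded);
  },

  onObjectAdded: function(object) {
    // ...redraw the stage...
  },

  // Marionette calls onClose when the view is closed - if we forget this,
  // the old listener keeps firing and we get the multiple-firing bugs.
  onClose: function() {
    App.vent.off('object:added', this.onObjectAdded);
  }
});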

The problems that this caused were very hard to debug, because it really wasn’t obvious what the actual causes of the fairly random-looking multiple-firing effects were. There might be something that could help with this now though: Chrome debugging tools now have the ability to inspect asynchronous call stacks.

Eventually, we fixed all the critical bugs that we had identified, pushed out the source code to GitHub, and made the new version live.

 

Lessons learned

So what did we learn from this experience?

Hack-start for the win

Hack-starting your project can be a brilliant way to kick things off. It’s amazing how many important decisions can be made in such a short space of time. Let’s Code! evolved further along the way, but we pretty much had the core concept figured out after just two days. From the direction that gave us, it was relatively easy to then create our 8-10 week ‘alpha’ prototype.

Prototypes != products

But going from prototype to product is harder. Don’t treat this lightly - it should be approached just as carefully as any production build.

Compared to quick hacks and prototypes, doing things properly takes a lot more time. For production apps (especially open source ones, where it’s essential that external developers can get up and running as quickly as possible), you need: 

  • To think more carefully about the architecture

  • To adopt automated testing

  • To spend a lot more time on cross-browser testing and fixes

  • To write decent documentation

  • To use solid development practices, such as conducting code reviews


All of these things can slow down development, but they’re necessary to ensure the quality of the code, and therefore the app.

It’s hard to refactor, even harder to rewrite

Unless it’s just a few days’ worth of messy hack code, or you have a really good reason, prefer refactoring over rewriting (see also Joel Spolsky’s Things You Should Never Do!)


Experienced developers should lead the refactoring in a hands-on way, ensuring that newer developers are properly supported.

Get the infrastructure in place early

One of the things that’s a lot better with the beta is that we have a good infrastructure in place for things like linting, testing, building and deploying. Edward introduced Grunt for this. It was definitely worth putting this in place early - it has saved us time every time we have deployed, and automated linting has picked up many issues along the way. Having a good set of tests in place is also invaluable if you’re doing lots of refactoring.

 

Test with real users early on

We benefitted from getting feedback from real users while development was still ongoing. Arun, our lead designer, surveyed a number of teachers and he visited schools to see it being used first-hand. It was very useful to understand their expectations and their reactions to the app. 

We haven’t been able to address everything at this stage, but as we are able to develop it further, it should help us to focus on what’s most important.

 

Try it out and grab the code

The Let’s Code! beta is now live for anyone to use. At the moment, the beta is still quite limited:

  • You can only create projects via the tutorials

  • There are only three tutorials, based on a single scenario: the long jump

  • The actual JavaScript code for methods is not yet editable (although you can view it) 

However, we hope that it conveys the concept and you can imagine lots of ways that it could be extended.

It’s an open source project freely available on GitHub, to enable developers, including code-savvy teachers, to customise the app and add their own features, assets and tutorials. We hope to encourage the developer community to find fresh and creative ways to extend Let’s Code! and help more young people start learning how to code.

Whether you’re a teacher, student, developer, or just someone who’s interested to take a look, we’d love your feedback, so please try it out and let us know what you think!

image

image

April 2014: Highlights from Front End London and State of the Browser 4

Last week I attended Front End London and State of the Browser 4. Here are my highlights (what were yours?):

 

Bridging the gap between developers & designers

[Link to presentation slides and notes]

Kaelig Deloumeau-Prigent from the Guardian gave some insights into how they’re developing the new responsive Guardian website.

The numbers are impressive/scary:

  • About 16K lines of Sass (full compilation takes “a while”!)
  • 55 contributors to the GitHub project (all internal so far), 25 of those working on HTML + CSS
  • About 4 releases per day

So if their designers and developers aren’t communicating efficiently, they have a problem. That’s why they define their whole Design System using Sass variables. Colours, media query breakpoints, the grid system (padding, margins)… they’re all defined with meaningful names that will be used by both the designers and the developers.

They also created their own Sass mixin called sass-mq to help define their media queries in a more elegant way. It allows them to do things like this:

@include mq($from: tablet, $to: desktop) {
  ...
}

   

"Mobile Web is rubbish"

[Link to presentation slides]

Peter Gasston gave an entertaining talk, titled “Over-promised and under-delivered”, about how we need to up our game, because too many Mobile Web experiences are just rubbish!

Only 41% of the top 100 sites have an actual mobile site, and only 6% are significantly optimised in terms of page weight.

image

Some mobile sites that Peter singled out for particular condemnation were:

A site which tells you which way to hold your phone!

image

And a mobile site which tells you to go away “and visit our site on a real screen”!

image

(As Jake Archibald later pointed out, they obviously didn’t clock that this would be extra ironic, given that their desktop site lists “Responsive Design” as one of their specialist skills!)

For more hilarious/exasperating examples of poor mobile web experiences, see WTF Mobile Web and Broken Mobile Web.

It’s no wonder that people are spending more time in native apps and less time in the browser.

Peter then went on to discuss things that we should do, such as:

Use the meta viewport tag to set width=device-width, and iOS 7.1’s new minimal-ui, which automatically uses ‘full-screen’ mode, i.e. hides the URL bar in portrait, and also hides the status bar and title bar in landscape.

image

But:

  • Don’t set user-scalable=no or maximum-scale=1! That’s an accessibility no-no. Think about people who are partially sighted.
  • A member of the audience said they’d done some user testing and with minimal-ui switched on, users didn’t know how to find the back button, or how to exit their app!
  • Apparently minimal-ui causes problems if you’re also using smart app banners.
  • Also, the viewport meta tag is not a standard and there are efforts to replace it with a @viewport CSS spec (unfortunately although it’s CSS, to ensure it’s picked up as quickly as possible, you are advised to include it inline!).

And hopefully the new “Picture 2.0” <picture> element standard will help with responsive images. There’s a polyfill called Picturefill which looks like it’s up to date with the latest spec. And it should actually ship in browsers in “a few months”.

 

Network: Optional (Service Workers)

[Earlier version of the talk here]

Always entertaining, Jake Archibald gave a great talk (mainly) about Service Workers (the new NavigationController which was the new AppCache!).

There’s a lot to it and it seems like something that will take time for people to fully understand and figure out the potential for.

Peter Gasston earlier described it as “like Node.js that works in your browser”. Apparently it can allow you to do background services that continue working even if someone leaves your site. This could be great for offline capabilities, e.g. checking when a connection comes back up and then syncing back to the server. It also gives you full control over caching: you get event listeners for requests and you can effectively hijack the response and decide to return something from your cache. And your cache won’t go stale automatically; it’s up to you to remove entries when you want to.

So a usual process will be: deliver your page shell from the ServiceWorker cache, show a spinner, try to fetch fresh content and show that if it arrives, otherwise show your cached version, then hide the spinner.

You can also check requests to your APIs and look for a header (e.g. x-use-cache, something you define yourself) and return cached responses if the header is included.
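Roughly speaking (and bearing in mind the API is still in flux), that header check might look something like this inside the Service Worker - x-use-cache being the self-defined header mentioned above:

// Sketch only: serve API requests from the cache when the page opts in
// via a custom header.
self.addEventListener('fetch', function(event) {
  var request = event.request;

  if (request.headers.get('x-use-cache')) {
    event.respondWith(
      caches.match(request).then(function(cached) {
        return cached || fetch(request); // fall back to the network
      })
    );
  }
});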

ServiceWorker should move to a public draft and get an implementation in Chrome in the next few weeks (NB. currently there’s a flag to switch on Service Workers, but you can only test registering/unregistering!)

 

Open Web Apps

[Link to presentation slides]

Another great speaker, Christian Heilmann from Mozilla talked about Open Web Apps, i.e. “the best of Apps and the best of the Web”. Essentially they are just web apps with the addition of extra Web APIs for device features and a manifest.webapp file used to make them ‘installable’, and useful for app marketplaces such as the Firefox Marketplace.

I didn’t realise but they now have integration with Android as well as Firefox OS.

Cordova now supports Firefox OS and there’s an article about porting existing web apps to Firefox OS here.

 

Update

These are just four talks from the many across the two events that I picked out as my personal highlights, but there were lots of others that you should check out too.

Take a look at www.frontendlondon.co.uk and browser.londonwebstandards.org/schedule/ for the slides/videos of the rest!

 

Arduino with JavaScript (Breakout.js)

On 19th March, I attended an Introduction to Arduino with JavaScript night class.

In 3 hours, Lily Madar guided us to create our first Arduino applications using Breakout.js, a JavaScript Arduino interface for the browser.

First, we just made an LED blink:

image

But soon we were playing with colour-changing LEDs, buttons and potentiometers. It was exciting to be able to create a custom, physical hardware interface for the browser.

For our final project, we had a choice. I chose an HTML5 Canvas Etch-a-Sketch. It was easy to hook up two potentiometers for drawing the line horizontally and vertically. And I included a button for erasing the picture.

image

My (messy) source code is up on GitHub.

The biggest issues I found with Breakout.js were:

  • The interface with the hardware is only live while your tab is open in the browser
  • Most of us had to keep restarting the Breakout server often while we were developing, due to weird errors

So it’s not for real consumer applications, but it’s a cool prototyping tool and could make for some fun personal/office projects. For example, you could make an LED countdown clock, counting down to your next release. Or a set of build server traffic lights.

All in all, it was a really fun event. A fellow attendee also wrote up a nice blog post about it here.

Hybrid app workflow with Grunt, via XCode

These days, with web apps getting more complex, it’s getting more common to have an automated JavaScript-based build process - including things like:

  • Running tests and linting
  • CSS compilation (from SASS/Less)
  • Combining and minifying JavaScript files
  • Single-command build and deployment
  • Live re-loading of changes during development


Grunt (like Ant for web apps) enables all of these and a whole lot more. It’s only been on the scene since 2012, but it seems to be exploding in popularity right now.
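A cut-down Gruntfile gives a flavour of it (the plugin names here are the common grunt-contrib ones, and the paths are made up; your tasks will differ):

// Gruntfile.js - a cut-down example configuration.
module.exports = function(grunt) {

  grunt.initConfig({
    jshint: {
      all: ['js/**/*.js']                                      // linting
    },
    compass: {
      dist: { options: { sassDir: 'sass', cssDir: 'css' } }    // SASS compilation
    },
    uglify: {
      dist: { files: { 'dist/app.min.js': ['js/**/*.js'] } }   // combine + minify
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-compass');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // Single command to run the lot: `grunt build`
  grunt.registerTask('build', ['jshint', 'compass', 'uglify']);
};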

But how about using Grunt for a hybrid app?

I’ve been reading and talking about hybrid apps for a long time, but I’m actually just developing one for the first time now. Despite being a newbie, I thought it would be worth sharing how we’re setting it up.

NB. Despite the aim being to make the app as cross-platform portable as possible, this post is going to specifically talk about iOS (we’re only targeting one platform for the initial prototype).

The first thing I tried was - of course - PhoneGap. But I was disappointed with the standard workflow. I don’t want to have to run the PhoneGap build first, then load the resulting project in XCode, and then build and run the app from there. That makes the feedback loop for development too slow.

It might have been OK if we could have just tested the app in the web browser most of the time - if we just wanted to wrap a pure web app inside a native wrapper, or bolt on a plugin or two. But we need to develop a significant portion of this app in native code, so we need to be testing the actual native app very regularly. We don’t want to have two separate compilation steps. We need to build it and run it on an iOS device or emulator as quickly as possible.

It was about this time that we stopped looking at PhoneGap and started investigating how much work it would be to just write a UIWebView app with a simple iOS JavaScript bridge. I think we’ll probably go with the latter now, although I’m still wondering about PhoneGap (see below)…

So, what about Grunt? As I mentioned, we don’t want two separate build processes, so we need to combine the Grunt build process with the XCode build process. Thankfully we can do that quite easily with a Build Phase Run Script.

A handy StackOverflow post told me this wasn’t too crazy an idea. I soon ran into a problem though: the Compass SASS compilation failed. In the end it was just a case of fiddling with the PATH and environment variables. I’ve written up the solution as a self-answered StackOverflow post:

http://stackoverflow.com/questions/19710361/grunt-compass-task-fails-in-xcode-build-script/

So now our workflow is simply:

  1. Open up both our preferred web IDE (I use WebStorm) and XCode
  2. Edit the web code in the web IDE
  3. Do Build + Run in XCode.

Update

It’s now a few weeks later. Unfortunately we’ve since ditched this XCode-Grunt integration! For the following reasons:

  • We’re sharing our XCode project settings via Git, and we don’t have the same build paths.
  • For some reason it doesn’t update the JavaScript code until you re-build twice! I’m not sure why, but I guess it may be to do with the stage of the build process when the Grunt task takes place - perhaps it happens too late?
  • We’ve split up the work so my colleague is mainly working on the Objective-C side and I’m working mainly on the Web side. So far, my colleague hasn’t needed to update the Web code much, and I haven’t needed to run it up inside the native wrapper much.
  • I’ve realised it’s not actually that hard just to run grunt build separately first ;-)

Oh well… always learning!

Attempting fast 3D graphics for mobile web, without WebGL

Is it possible to create fast 3D interactive graphics for mobile devices, using web technologies? Since WebGL is not yet well supported on mobile devices, what technology should you use? CSS3D? Canvas? Or something else? And is there a high-level graphics library that could help you?

That was the subject of my presentation last night at the HTML5 CodeShow meetup.

image

I’ve been fortunate enough to be able to use Three.js for a couple of desktop web projects recently, and I’ve been very impressed with how easy it makes it to develop WebGL applications.

So when we were tasked with creating a new prototype mobile web application that may benefit from 3D graphics (an app for helping students to revise, called ZamBlocks), I jumped at the chance to try Three.js again. In this case, I wouldn’t be able to use its WebGLRenderer, but it also comes with a CSS3DRenderer and a CanvasRenderer. Both have good support on mobile devices. But then there’s the question of performance…
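The renderer choice itself is just a swap at setup time - something along these lines (a simplified sketch; supportsWebGL is a hypothetical feature-check helper, and depending on your Three.js version CanvasRenderer and CSS3DRenderer may need including separately from the examples folder):

// Pick the best renderer available: WebGL if we can, otherwise fall back to Canvas.
var renderer;

if (supportsWebGL()) {                       // e.g. a try/getContext feature check
  renderer = new THREE.WebGLRenderer({ antialias: true });
} else {
  renderer = new THREE.CanvasRenderer();
}

renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);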

My presentation runs through the different things I attempted, to try to achieve a good frame rate on various mobile devices. Along the way, I ran into some big hurdles, but I also found a couple of optimisations that helped significantly.

And as it turned out, the final designs for this particular prototype didn’t really require any 3D elements or whizzy animations. (In fact, the whole thing turned out to be very simple. If I was to start from scratch, I’d probably just use DOM elements, or maybe <canvas> directly without a library). But since we’re an R&D team, it’s good for us to try pushing the boundaries and seeing what we can learn along the way. It was a great opportunity to try the other Three.js renderers and explore what’s currently feasible for mobile devices.

As well as Three.js, my presentation also briefly covers Pixi.js, a fairly new 2D graphics engine. A bit like a 2D version of Three.js, it’s built for speed. It will use WebGL if it’s available, but if not fall back to Canvas.

My slides contain lots of embedded examples and videos of how things look on mobile. You can check them out here (arrow keys / swipe to navigate):

http://speedy-web-uis.herokuapp.com

And the code is on GitHub here:

https://github.com/poshaughnessy/speedy-web-uis

The Impossibilities and Possibilities of Google Glass development

In the last few days, Google have released the API and developer documentation for Google Glass.

They also have some videos (such as the SXSW talk, plus these) to guide us through the capabilities.

I thought I’d put together a quick list of the Impossibilities and Possibilities for third party developers (as I see it, from the information so far):

The following are not possible:

'Apps'

You can’t develop ‘apps’ as such, or actually install anything on the device. But you can develop services through timeline cards. These cards can contain small amounts of text, HTML, images, or a map, but there’s no scrolling, JavaScript, or form elements.

Update: This isn’t quite true! It turns out it is possible for techies to install Android APKs - by plugging it in with USB and enabling debug mode, on the Explorer version of the device at least. See this post by Mike DiGiovanni:

https://plus.google.com/116031914637788986927/posts/Abvh8vmvPJk
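As for the timeline cards themselves, inserting one via the Mirror API boils down to a small authenticated HTTP request - roughly like this Node.js sketch (the OAuth access token handling is elided and assumed to come from the usual OAuth 2.0 flow):

// Rough sketch: insert a simple timeline card via the Mirror API.
var https = require('https');

var ACCESS_TOKEN = process.env.GLASS_ACCESS_TOKEN; // assumed: from your OAuth 2.0 flow
var card = JSON.stringify({ text: 'Hello from my Glassware!' });

var request = https.request({
  hostname: 'www.googleapis.com',
  path: '/mirror/v1/timeline',
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ' + ACCESS_TOKEN,
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(card)
  }
}, function(response) {
  console.log('Mirror API responded with status ' + response.statusCode);
});

request.end(card);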

Realtime picture/video, or voice integration

It’s only possible to tap into users’ images and videos if they choose to share them through your service, after they’ve been taken. And it doesn’t seem possible for 3rd party developers to do anything with voice input. “At the moment, there doesn’t appear to be any support for retrieving a camera feed or an audio stream” (source)

Update: Except if you root it, of course! See:

http://arstechnica.com/security/2013/05/rooting-exploit-could-turn-google-glass-into-secret-surveillance-tool/

AR

Early discussions about Google Glass kept referring to it as an AR device. It’s not really AR at all. It doesn’t give you the capability to augment the user’s real-world view, except indirectly, through the small, fixed screen. (It’s actually less of an AR device than a mobile phone held up in front of your face).

Web browsing

"Users don’t browse the web on Glass (well, they can ask questions to Google but there is no API for that yet)" (Max Firtman)

Notifications

"We push, update and delete cards from our server, just for being there if the user thinks it’s time to see the timeline. It’s probable that our card will never be seen by the user… It’s not like a mobile push notification." (Max Firtman)

Eye-tracking

Early unofficial reports said there would be a second camera facing towards you, for eye tracking. From the official tech specs, it seems that’s not the case.

Update: I was right first time - it’s not mentioned in the tech specs (maybe they just don’t want to shout about it much right now?) but there’s definitely an eye tracking camera - that’s what enables ‘Winky’:

http://arstechnica.com/gadgets/2013/05/google-glass-developer-writes-an-app-to-snap-photos-with-just-a-wink/

Location, unless paired with Android 4+ phone

It was popularly reported that Glass would work with phones other than Android. But MyGlass, which includes the GPS and SMS capability, requires Android ICS or higher (source)

Direct revenue

There’s no charging for timeline cards, no payment for virtual goods or upgrades, and no advertising (source)

So what kind of services are feasible?

Services for often-updated content

To provide short snippets of content that the user will often want to have a quick glance at, to see the latest. For example, news headlines.

Update: You can also have short amounts of content read out for the user, using the “read-aloud” feature. See:

http://thenextweb.com/media/2013/04/25/the-new-york-times-releases-a-google-glass-app-that-reads-article-summaries-aloud/

Location services

To provide advice/information about nearby locations. For example, travel information or tourist destination tips.

Share services

For sharing your photos and video with your friends. Or sharing them with services (automated or not) that can do something with them and send you something back.

Simple communication / social networking

It’s possible not just to consume 3rd party content, but to reply with text or respond with selections. So reading and creating emails, text messages, Facebook status updates, tweets…  should all be possible.

To summarise…

The possibilities for third party developers are more limited than many hoped. But, there’s still an exciting amount to explore. And remember this is the very first API for the very first commercial device of its kind. (Compare it to the first version of the iPhone, which didn’t have an SDK or an App Store).

To quote Timothy Jordan, "It’s early days… We’re really just getting started".