At last — true grids in web browsers

I’ve moved over to Medium – here’s my first post: “At last — True grids in web browsers”.

Posted in Uncategorized

Using reactive streams on serverless with cyclejs, xstream and Azure Functions

[Update 2016-10-23: the code is now in the cyclejs community repo]

During development of my latest SaaS product, Brian, I’ve settled on a couple of key architectural decisions. For the frontend I’m using Reactive Programming (RP) with streams, and for the backend I’ve decided on the ‘Serverless’ (FaaS) approach. Specifically, I’m using André Staltz’s xstream with the incredibly light ‘framework’ cyclejs (though cyclejs supports other streaming libs, including the excellent RxJS). Microsoft’s Azure Functions provide a FaaS solution, backed up by many other options including BaaS and PaaS should they be required. This post looks at using them together on the backend.

[Images: the Cyclejs and Azure Functions logos]

I’m happy that the trade-offs and benefits of these approaches should meet my requirements: namely, achieving rapid development by focussing energy higher ‘up the stack’. I want to concentrate on innovation of user features and ‘business logic’, not boilerplate or DevOps. From my early explorations I think Serverless and cyclejs manage to hit sweet spots of benefit and learning curve. However, they are most definitely not silver bullets, having wrinkles all their own that take time and effort to learn and overcome.

When I initially created Functions code using a traditional imperative style I rapidly found I missed the RP style I’d become familiar with when using CycleJS with RxJS. It’s a style that gets under your skin once you make the mental shift. Perhaps my background in real-time async communications predisposes me to seeing the benefits of asynchronous handling of streams. But whatever, I thought it would be fun to try it in the serverless context. At least both front and back ends would then be similar in architecture.

You may wonder why on earth I would consider using RP in a FaaS context. After all, the FaaS architecture is all about small functions which run once triggered and then quickly end. Thus, it would seem there isn’t much scope for streams when there is a single trigger input event plus perhaps another data source or two. One often-touted advantage of RP is that its pure functions are easier to test, but that’s also a recommended practice with FaaS, so that’s not an obvious advantage of having RP as well as FaaS.

One reason for wanting to use RP is that any non-trivial Function is likely to have other asynchronous event sources, including SaaS requests and database updates over REST. Even so, there are other less tangible benefits of RP with cyclejs that I personally found over an imperative code style:

  • Loose coupling through reactive observers
  • Declarative style married to functional programming techniques
  • Separation of input, output side effects from the “pure” business logic

Together these engender a clean, high-level way to describe program logic. Bugs appear to be reduced, and the absence of side effects in the main code enables testing without excessive mocking. Sometimes, however, debugging can be more involved because current tooling supports imperative rather than reactive styles, though tooling is starting to appear as more people turn to RP.

So what differences do you find when running xstream and cyclejs in the Azure Functions environment compared to the usual browser (and sometimes nodejs) contexts? Surprisingly few, it turns out. Fortunately, Functions builds on Azure Web Apps, which supports nodejs and express. Better, it’s node 6.x that is provided, which includes all those key ES6 features that really help clean up RP code. Another plus is that the cycle HTTP driver works fine on node.

In this implementation I’ve taken the approach of providing a cycle runtime in each Azure Function. Effectively, each Function is a component in the cycle sense of the word, though connections between components have to be via HTTP, queues or other out-of-process couplings. This approach seems to be a good choice as you can use cyclejs or not for any individual Function, depending on its complexity and your preferences. As the Functions runtime is open source there is scope to explore different and more deeply embedded approaches.

So without further ado, here’s the Functions driver code in outline.
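This is a sketch rather than the exact code: the module layout and names such as makeFunctionsDriver and setDispose are illustrative, assuming xstream on Node 6.

```javascript
// Sketch of a cyclejs driver factory for Azure Functions (names are illustrative).
const xs = require('xstream').default;

function makeFunctionsDriver(context, inputs) {
  let dispose = null; // forward declaration: set once run() has returned

  // Source: the Function's inputs ("context" plus the input bindings) as a stream.
  // Sink: the first value emitted is taken as the Function's output and signals completion.
  function functionsDriver(sink$) {
    sink$.take(1).addListener({
      next: output => {
        // Assumption: the sink emits an object keyed by output binding name,
        // e.g. { res: { status: 200, body: ... } } for an HTTP trigger.
        Object.assign(context, output);
        context.done();
        if (dispose) dispose(); // clean up the streams created by run()
      },
      error: err => context.done(err),
      complete: () => {}
    });
    return xs.of({ context, inputs });
  }

  // console.log is not useful in Functions, so log via context.log instead.
  function logDriver(msg$) {
    msg$.addListener({
      next: msg => context.log(msg),
      error: err => context.log(err),
      complete: () => {}
    });
  }

  const log = msg => context.log(msg); // also handy with xstream's debug()

  return {
    functionsDriver,
    logDriver,
    log,
    setDispose: d => { dispose = d; } // workaround for JS's lack of pass-by-reference
  };
}

module.exports = makeFunctionsDriver;
```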

As with all drivers, the Function driver is there to handle useful side effects such as input and output. In this case it converts the Function inputs (“context” and an array of input bindings) into a source stream. It also sinks a stream containing the function’s output. This also acts as a signal that the function should complete (the driver calls context.done).  The sink also disposes of the streams created by run() for cleanup (this adds a little implementation complexity due to a forward declaration and JS’s lack of true pass-by-reference).

It turns out that using console.log is not useful in Functions, rather the alternative context.log is used. Thus, we also provide a Log driver that uses this channel. This is also used with xstream’s debug operator, which fortunately accepts a function argument as well as a value. I also decided the FunctionsDriver factory would return the log function itself as well as the driver. In this way nearly all the FaaS platform dependencies are encapsulated in the driver. This makes it possible to write a version for AWS Lamda or other serverless frameworks.

Here’s an example usage for an HTTP Function. It starts a 1 second ticker and on the 3rd tick makes a REST API request, then returns the first item from the response as the Function output. The code demonstrates the use of all the driver features and the clarity of RP with cyclejs.
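A sketch of how it might look; the endpoint URL, the output binding and the 2016-era Cycle packages are assumptions rather than the exact code:

```javascript
// Sketch of an HTTP-triggered Function using the driver sketched above.
const xs = require('xstream').default;
const { run } = require('@cycle/xstream-run');
const { makeHTTPDriver } = require('@cycle/http');
const makeFunctionsDriver = require('./functionsDriver'); // the driver above

module.exports = function (context, req) {
  const functions = makeFunctionsDriver(context, [req]);

  function main(sources) {
    // Start a 1 second ticker and fire a REST request on the 3rd tick.
    const request$ = xs.periodic(1000)
      .debug(functions.log)            // debug() happily accepts our log function
      .drop(2).take(1)
      .mapTo({ url: 'https://example.com/api/items', category: 'items' });

    // Return the first item of the response as the Function's output.
    const output$ = sources.HTTP.select('items')
      .flatten()
      .map(res => ({ res: { status: 200, body: res.body[0] } }));

    return {
      FUNCTIONS: output$,
      HTTP: request$,
      LOG: xs.of('Function started')
    };
  }

  // run() returns a dispose function, which we hand back to the driver
  // so it can tear everything down once the output has been emitted.
  const dispose = run(main, {
    FUNCTIONS: functions.functionsDriver,
    HTTP: makeHTTPDriver(),
    LOG: functions.logDriver
  });
  functions.setDispose(dispose);
};
```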

One issue that needs to be ironed out is that exceptions such as syntax errors sometimes get lost and are not presented in the Functions logs. That’s probably xstream not re-throwing captured exceptions. For now the fix is to put try…catch(log) blocks around parts of the code to get visibility.

What do you think? Does this approach work for you?

Posted in development, web

Is the web getting less webby and will serverless make it worse?

[Title inspired by a quote from Scott Hanselman on Serverless with Azure Functions ‘It’s as close to “cloudy” as The Cloud can get’]

There are some big changes happening in web land, fuelled by rapid app framework developments and advances in cloud land. In particular, the architectural client-server split is shifting. We currently see these architectures (and variations):

  • Classic 3 tier with web server passing presentation to the web browser as linked pages of HTML etc
  • Ajax-ified with some presentation elements being dynamically requested and updated by the browser or even generated client-side from received data.
  • Single Page Applications where presentation and navigation are completely generated in the browser which directly accesses various 1st and 3rd party RESTful APIs (SaaS).
  • Opera Mini is very popular, especially in poorer countries. It is something of an architectural oddity as it renders on a display server and uses a thin-client style display protocol to the client app for data efficiency. This is effectively a final stage applied after the others in this list.
  • GraphQL is becoming popular for API queries as it lets the client dictate the payload and simplifies queries that would otherwise require multiple RESTful API round trips.

SPAs bring web apps into parity with Native Mobile Apps (and even some Desktop apps). The main difference now being the specific client side SDKs used to bind to the messaging protocols such as REST over HTTP.

On the server we see hosting being outsourced at progressively higher levels in the stack: the so-called cloudy IaaS, PaaS, BaaS, FaaS and WaaS. In addition, microservices are being used to break up monolithic middle tiers. In the last year we have seen the rise of interest in so-called Serverless (BaaS, FaaS and WaaS). This was initiated by the introduction of AWS Lambda, quickly followed by other providers including Azure Functions, Google Cloud Functions and OpenWhisk.

[Image: a cute pompom spider]

So, exciting times for developers! But are these architectural changes eroding core webby principles, especially the very carefully developed inclusive design principles? What is the impact on web users?

After 25 years the classic web architecture has, with the help of Web standards, become available to almost everyone regardless of their device capabilities or their accessibility needs. Well, that’s the theory. The reality is completely dependent on developers being aware of best practices and prioritising them. HTML presentation elements are rendered by a wide range of browsers on varied devices, including desktops with mouse and keyboard input, portable touch-enabled devices with sizes from watches to tablets, and even hybrids such as 2-in-1s. In addition to device variability, the Web standards and best practices support human and context variability through carefully baked-in accessibility.

The move to micro (and nano FaaS) service architectures on the back end should have limited impact on this webbiness as they are internal details of the servers. However, the protocols used between client and server are RESTful in the web world; or rather, RESTful communications are the lifeblood of the web. Newer developments like GraphQL start to move away from the web’s RESTful architecture by effectively using one part of HTTP as a transport (somewhat like SOAP). However, this is largely a detail of interest to developers only, as far as most web users are concerned.

On the face of it, the use of client-side generated presentation with AJAX or a SPA should make no difference to webbiness either. True, dynamic creation of the UI is open to developers playing fast and loose with the standards, and accessibility is often the first casualty. But this is just as possible when content is generated on the server.

A big difference between SPAs and HTML apps is that browser developers put enormous effort into ensuring bad HTML and CSS fail gracefully across supported devices. JavaScript, on the other hand, is NOT fail-safe. An error means it crashes and the user probably gets a nasty surprise. Individual developers or client-side JavaScript framework developers have to effectively duplicate the effort that browser vendors go to in order to get as rugged a UX. Thus the user experience may not be as consistent or as accessible with a SPA.

Another issue is that developers want to use the latest and greatest browser features, often in order to give a great UX. For example, Service Workers allow developers to provide a great offline experience. As the rate of change accelerates, the chance of a user having an old browser that doesn’t support a shiny new feature increases. This is much more pronounced with features accessed through JavaScript code than with HTML, as that is where the speed and focus currently are. Even the JavaScript language itself is rapidly evolving, with new features developers are keen to use. So, unless there’s careful design to work with a range of devices, users may be left stranded.

An established technique to avoid these problems is Progressive Enhancement, where a basic HTML experience is available and UX enhancements are layered on for users with browsers that support the latest CSS and JavaScript shininess. But with SPAs there is no initial HTML rendering for less able browsers. Lately, techniques such as server-side rendering and Universal (isomorphic) JavaScript restore this to a large extent. Interestingly, the drive for these techniques has been SEO (Google can’t spider a client-side app) and time to initial content display, rather than PE concerns.

In summary, then, the architectural shifts we are seeing do provide new ways to break the carefully designed universal nature of the web and exclude users of some devices or with some accessibility needs. However, by carefully following inclusive design thinking at the system level these risks can be minimised. The shape of web app architectures may be changing, but we can ensure the core principles remain in our minds as we develop.

 

Posted in development, HTML, web a11y, web apps

AWS, Azure or Firebase for a SPA browser app? Nope, it’s Kinvey

[Update 2016/06/07: I eventually found Azure to be lacking, though this did lead me to explore the excellent Auth0 for authentication. To be honest, all three offerings are currently pretty much a bunch of ‘beta bits’, an apt phrase coined by Michael Facemire and Jeffrey S. Hammond in their “Forrester Wave™: Mobile Infrastructure Services, Q3 2015”. I’m now exploring Kinvey, one of the services mentioned in that report. Kinvey are certainly responsive and tick most of the boxes. I just hope they can deliver as I’m seriously behind schedule]

[Update 2016/06/08: After reviewing Kinvey and chatting with them I’ve decided it supplies just about all I need. I’ve added them into the comparison below for future reference. The one missing feature is a full CLI to enable scripted “from clean” setup and so CI/CD, but then I didn’t think of that when doing the original post. Another point I forgot was encryption of data in client storage, which they also have covered.]

My current work is the Brian project for people with cognitive disabilities. This open source development is funded by the Prosperity For All EU FP7 project (part of the GPII initiative). The plan is for it to become a self-financing service based on Gregg Vanderheiden’s Easy One Communicator and features from MAAVIS.

After much thought about implementing Progressive Enhancement for a proper ‘web app’ versus usage scenarios requiring offline access, I decided to start off making a so-called Single Page App (SPA), or ‘browser app’ as I prefer to call them. SPAs require JavaScript for their functionality and tend to take advantage of all the latest features in the evergreen browsers. They treat the browser as a platform and are designed as part of a system architecture with custom client-server splits. This usually means consuming REST or other APIs directly in the client in order to access a broad range of services (often microservice based). These services may be part of the architecture being developed or from the many 3rd party offerings that add value through aggregation (or mashup).

Those services that are part of the new system being developed might be implemented as an HTTP server, either self-hosted or in the cloud (so-called IaaS). However, these days it is possible to go ‘serverless’ by using so-called Backend as a Service (BaaS) cloud offerings for ‘mobile’ apps (MBaaS). These go a step beyond Platform as a Service (PaaS), which lets you concentrate on your server software at the top of your backend stack. They also add features that are critical for mobile situations, e.g. offline data sync and user authentication. I decided to go this route as I really did not want to get involved in DevOps or SysOps or whatever you want to call service configuration, maintenance and security. Another advantage of BaaS is you can easily scale up the backend should your service ‘go large’; you just need to pay more.

The three main PaaS offerings that include some BaaS are Amazon Web Services (AWS), Microsoft Azure and Google Firebase. My initial thought was that these big operators would have the best dev experience.

  • Amazon AWS has been around the longest and is easily the most popular IaaS. Not bad for a spin-off from Amazon’s own in-house services. I discovered them via the excellent Serverless project early in my investigations.
  • Azure has steadily added features since its early IaaS-only days and, with the recent introduction of Apps and Functions, looks like a pretty reasonable BaaS. It has a strong enterprise positioning.
  • Firebase was until very recently quite limited. They have just rounded out the PaaS offering by adding authentication, storage and other features. Firebase is strong on metrics and pushing ads to users (no surprise there, as it is now Google / Alphabet). They often mention the most up-to-date requirements of SPA developers using modern JS practices.
  • Kinvey is about 4 years old and started out supporting indie developers (like me) but recently pivoted to be more enterprise focussed. Their founder and CEO Sravish Sridhar claims to have invented the term BaaS and proved the model works. They have a rounded provision and fully support HTML5 and JavaScript in the browser.

As an aside, the Serverless project simplifies the configuration of your backend. This is especially important in a team environment. Though Serverless is currently tied to parts of AWS, Azure support may come.

I tried AWS first. However, after writing some client code and hitting many problems and confusions, I finally decided enough was enough and I should look at the others. The next big sticking point was username-only sign-in, as most Brian users will not have email addresses. The following is a brief summary of my findings and thoughts based on my requirements.

Disclaimer: this review is a result of reading around the subject; with the exception of AWS I have not tried working code yet. I also looked at the free tiers, but with an eye on the expansion options.

Clear docs and examples for JavaScript mobile web app client

  • AWS: Quite a mess. iOS and Android are first class but JavaScript is poorly linked and rather hard to find. To be fair, some components are beta. The main problem is lots of bits and no clear complete examples.
  • Azure: Yes, but slightly confusing messaging and many features that are not relevant to browser apps. “Web Apps” focusses on the back end services while a “Mobile App” adds an offline-capable OData v3 feed for data and client SDKs including JS. Apache Cordova (hybrid apps) is often mentioned whenever JS is, but apart from a few dependencies on Cordova plugins, browsers are equally well supported. There is an excellent series of posts from the Apps project lead.
  • Firebase: Excellent getting oriented and get started docs with good complete examples. Clean SDKs. All really easy to find.
  • Kinvey: Hard to fault really: case studies, developer guides, references, samples and code, all easy to find and digest. No bloat or hype, just good information. They even have some whitepapers and ebooks introducing the wider topics and comparisons with other providers.

All the JS client SDKs are open source projects

Support

You get what you pay for with support, so I just tried pre-sales via Twitter and other free channels.

  • AWS: I tried issues on the client SDK projects with limited success.
  • Azure: I got excellent direct help from the project lead.
  • Google: Use Stack Overflow – no response yet
  • Kinvey: Pretty much perfect. Initial Twitter contact from the CEO when I happened to mention Kinvey. Swiftly followed by voice discussion with JS lead. Forum is also active.

Static hosting for SPAs

SPAs only need static hosting for the HTML, JS, CSS and other assets. While developing you don’t want caches to get in the way; in production you want CDNs to give fast global access. HTTPS is a must-have, as is URL rewriting, since SPAs use pushState to simulate URLs and we need to stop the server throwing 404s.

  • AWS: S3 doesn’t provide HTTPS; CloudFront does, but that is only suitable for deployment.
  • Azure: All covered, Blob storage looks best or possibly web apps
  • Firebase: all covered
  • Kinvey: No. Perhaps the only missing feature

There’s always GitHub Pages, Surge and other services for cheap static hosting.

Simple sign-in with Username and Password

Brian users are elderly people in residential environments and are most unlikely to have email, SMS or accounts with other social services. Thus the commonplace and more secure authentication flows that require email or SMS cannot be used. Even a password may be too much for a person living with dementia.

  • AWS: Explicit with Cognito User Pools, a new beta service.
  • Azure: a fairly straightforward example is given using Auth0, a separate service.
  • Firebase: requires a custom Auth flow and identity server – perhaps works with Auth0
  • Kinvey: Yes. Plus options for enterprise and social federated

Offline data sync for user config

This means no need to use REST APIs for data access. Just read/write locally and the system takes care of the details as and when connectivity is available. Further, sync supports updates between devices. It should also optimise battery use and metered connection costs. I’m not particularly bothered whether the data is JSON, key-value pairs or SQL.

  • AWS: supposedly easy using Cognito Sync. I hit problems with basic set/get transactions, which caused me to look at others.
  • Azure: not yet, but in progress in a fork of the GitHub project. Will initially be for Cordova apps only.
  • Firebase: yes. They mention all the important points.
  • Kinvey: Yes. Again covers the bases, plus works with all browser storage options. Very clean, flexible SDK based on RxJS observables and providing a fluent-style API for queries. This is a big bonus for Brian, which uses CycleJS with RxJS.

Storage for media files and URL access

Brian needs to display local images but the FileAPI URLs used to access local content are temporary. This is almost certainly security related. Thus we unfortunately need to upload local files, store them and access them with a private URL. Alternatively we could create a hybrid app to circumvent the sandboxing, but then we would have to play the App Store dance and I’ve no desire to do that.

  • AWS: S3 and the Generate Web URL API for public and signed URLs
  • Azure: Blob storage with public and private URLs
  • Firebase: just released. Not clear how to get URL as operations seem to be upload / download only.
  • Kinvey: yes, delegates to Google cloud storage. Still not clear how to provide private URLs

Server side code execution, AKA business logic

Rather than setting up a full server it should be possible to run ‘snippets’ of code. Obviously security, authentication and integration with other parts are all important.

  • AWS: yes – Lambda
  • Azure: yes, Functions – still beta and not yet fully integrated with Apps
  • Firebase: no
  • Kinvey: Yes. Restricted node environment on free tier, full node on paid tiers.

CLI to make config easy to manage and reproduce

All the systems have snazzy interactive web GUIs, but as I discovered with SQL Server maintenance, you *really* need to script your configuration to make it reproducible and to easily make bulk changes. This is especially important for CI/CD and for allowing anyone to easily set up an open source project from scratch.

  • AWS: yes
  • Azure: yes and not just Windows either
  • Firebase yes.
  • Kinvey: Only for business logic on the free tier. Requires manual management and deployment of some config. Offers image cloning on other tiers.

Features for metering and crash reporting

  • AWS: yes
  • Azure: yes
  • Firebase: naturally strong, given Google’s business model.
  • Kinvey: only with enterprise tier

Realtime messaging, data and push notifications

I’m not bothered by this right now but it might be useful. I didn’t spend any time looking at this

  • AWS: push notifications
  • Azure:  push notifications
  • Firebase: they make a big thing of push etc.
  • Kinvey: push notifications but not for JS clients (yet)

Collaboration opportunities

I’m developing the client in CycleJS and RxJS (switching soon to xstream) and the small but growing community includes people working on another SPA using Firebase. It would be great to share effort with the sparks project, especially as they understand the technology a lot better than I do. If I don’t use Firebase it might still be possible to share concepts with them and maybe make a similar driver for the community to use with another provider.

Cost

I left the most important till last. Brian is an open source project and I’m collaborating with various EU projects who are performing trials. So as a micro SME I’m very cost sensitive, at least until I get the self-financing service going. Thus a free period or credits are vital.

  • AWS: 1 year free access to almost everything. Looks cheap after that.
  • Azure: Somewhat confusing array of subscriptions that can run in parallel. Free month, F1 tier, MSDN and BizSpark. I’ve applied for the latter. Pricing a bit confusing and one page seemed out of date.
  • Google: most parts are free but some appear to always be chargeable.
  • Kinvey: Free developer tier good for all non-enterprise-focused features; can have unlimited backends (called Apps).

Conclusion

Part of the reason for this post was for me to collect my thoughts and make a decision. So despite being very late with getting an MVP out the door, I’m going to use Kinvey rather than spend more time trying to get AWS working or fighting with the gaps in Azure’s provision. I’ll take the extended learning curve and the present lack of offline sync on the chin. It looks like Kinvey has all my requirements covered except a full CLI, so with any luck I can just get on with my app now! I’ll post my experience.

Posted in serverless, Uncategorized, web, web apps

“I don’t care about the OS, just give me my web Apps”

So I tweeted in jest to Bruce Lawson today in a conversation about Progressive Web Apps in the aftermath of the excellent WebProgressions one-day conference.

And then I realised, I actually meant it!

My point is that as a user of tech I want to get at the content or functionality I find useful or interesting when I want to. I want to do so whatever device I have in front of me or on me. I’m not interested in arbitrary platform distinctions or fanboy love affairs. To be honest I find the main desktop OSs are ‘the same but different’. Ditto mobile OSs. And that’s OK. I’d even be happy if devices became commodity infrastructure. But the market isn’t quite like that.

If I quickly want some info I’ll use the web. If I want to do something repeatedly and it’s convenient for me to let the service save info about me for *my* benefit, I’ll use an app. Furthermore, as I probably want to get access to the same stuff on different devices, that really means I want a web app. That’s the closest we’ve so far got to the “write once, run anywhere” dream.

Variety is good for choice and drives quality so I’m happy that there are competing browsers and OSs. Just as long as they seamlessly support the features I want. And these days that probably means they use basic features covered by a W3C standard.

Does that mean I want my experience of the web sites and apps I access to be identical whatever the device? Absolutely not. I want variation that suits:

  • My interaction modes and environment. For example, desktop with keyboard and large screen or mobile with touch (but note these personal and technical modes are all blurring)
  • Personalised access according to my preferences and accessibility requirements and environmental constraints (eg driving)

Actually, those 2 are really just different facets of the same thing: a Personalised Accessible User Experience, or AUX.

I don’t want an experience based on the supplier’s development priorities or convenience, nor on some marketing wish to push stuff at me for business benefits (especially ads). But, it turns out, platform does matter as the accessible experiences are not equal.

I want a user-centred AUX whatever the device. No more and no less.

Having started taking Microsoft seriously again, I do think they get much of this, even if they are going to start charging for Windows 10 again. It looks like they are focussing on the cloud and services rather than just the Windows OS. With Edge, they are now engaging with web users and the development community in very impressive and meaningful ways. They have made accessibility important at a high level. Continuum and devices like the Surface Pro accept our desire to change our interaction modes during the day, and even encourage it.

Just don’t expect them to open source Windows just yet!

Posted in a11y, Apps, web, web apps

Free and easy HTTPS certificates with CDN with Kloudsec

HTTPS is a must-have for any web service, SPA or progressive web application, and so it is naturally high on my list of things to get to grips with. As a first step for the Brian project I’m creating a SPA (browser client-side app) using web assets served up with GitHub Pages (basically free hosting). It’s easy enough to set up the static pages and a simple deploy script. If you stick to the GitHub-supplied URL (eg http://opendirective.github.io/brianLive/) you get CDN and HTTPS access.

However, if you have a custom domain pointing at your GitHub Pages (eg brian.opendirective.net) then a) you lose HTTPS support and b) you lose the CDN functionality if your custom domain is an apex domain (an apex domain is a domain without prefixes, such as example.com, not www.example.com).

The thought of setting up HTTPS certificates used to fill me with dread. After reading around I was very disillusioned by the apparently complex and tedious process, assuming I understood it correctly.

Recently however, LetsEncrypt arrived on the scene, soothing stressed web developers with their streamlined process for creating free HTTPS certificates. Still, the process does include installing and operating a local client tool, so I decided to wait a bit.

Then Steve Goh (@nubela) of Kloudsec cold-called me, asking if I’d like to try the new version of their developer CDN service which supports GitHub Pages. I’m pleased he did. This new service provides GitHub Pages custom domains with a Kloudsec CDN, HTTPS certificate provisioning and various plugins.

As you can see from kloudsec.com/github-pages, it’s a simple 3-step process. If you’ve already set up your GitHub Pages then you’ll have done one step already. After registering with Kloudsec and setting up GitHub Pages in your repository you’ll need to change your DNS settings. This only requires adding 2 new records (an A record and a TXT record for verification of ownership), plus you’ll want to remove any CNAME you may have previously created for the GitHub Pages setup.

It all goes very smoothly. The website dashboard is clear and you get progress emails. You’ll obviously need to wait an unknown time for DNS propagation, but otherwise it’s a simple few clicks and edits before your GitHub Pages are served over HTTPS. You can turn on a redirect from HTTP to HTTPS as well.

I hit a few rough edges, which is not surprising given the beta status, but nothing I couldn’t easily resolve. The emails and dashboard make it all pretty clear. I’m sure the process will be made even smoother.

In summary, for zero cost except a few minutes’ work you get a CDN with North American, European and Asian presence, speed optimisations, HTTPS serving with certificate provisioning, automatic backup serving of your pages, anti-hack features and a clear dashboard of performance. Other paid plugins are or will be available, and I’m sure the simple one-click install will make them really attractive. You can also download your certificate should you want to use it with alternative hosting arrangements.

The Kloudsec service is not just for GitHub pages but works with any domain.

Highly recommended.

Posted in development, devops, opensource

More on portable npm scripts

Following on from my earlier post on the topic of writing portable npm scripts, here are a few more useful tips.

[UPDATE 2016/03/31: Bash for Windows was announced at Microsoft Build 2016. This exciting feature will allow running of Linux npm script builds with ease. See Scott Hanselman’s blog post]

[UPDATE: 2016/03/29: The recently released Docker for Windows beta might be a good alternative to using a VM. It uses Hyper-V.]

[UPDATE: 2016/03/29:  This is a comprehensive article on using npm for build]

Copying files

Use the ncp module to copy files. It goes nicely with the mkdirp and rimraf modules mentioned before; see the example scripts below.
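For instance, a package.json scripts block along these lines (the script names and paths are illustrative) stays portable by only using the CLI wrappers these modules install:

```json
{
  "scripts": {
    "clean": "rimraf dist",
    "makedirs": "mkdirp dist/assets",
    "copy": "ncp src/assets dist/assets",
    "build": "npm run clean && npm run makedirs && npm run copy"
  }
}
```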

Setting environment variables

It’s common to have scripts with a command line of the form
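Something along these lines, where the webpack invocation itself is just an illustration:

```json
{
  "scripts": {
    "build": "NODE_ENV=production webpack -p"
  }
}
```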

This sets the environment variable NODE_ENV for the duration of the command. In this case it is used to perform a production build with webpack.

Such syntax works fine in bash etc. on Linux / OS X but fails on Windows, where npm scripts always use CMD. One solution is to use the cross-env npm module, which uses a regex to find environment settings (and so is probably not fool-proof). Once installed you just prefix your command like so:
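A sketch, reusing the illustrative build script from above:

```json
{
  "scripts": {
    "build": "cross-env NODE_ENV=production webpack -p"
  }
}
```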

Running an extra bash process

I use the Git for Windows bash shell for all my development CLI needs on Windows (it is also installed as part of the GitHub Desktop for Windows). It is based on the mature MSYS / MinGW port of the Linux build environment and works pretty well, though some of the commands are old versions.

On Windows, npm ignores the current shell from which you run it and doesn’t pass that shell on to the sub-processes as you might expect. However, you can easily run bash as the main command in an npm script (it’s an extra process, but that hardly matters). This works because bash sets the path, which is then inherited by the cmd subshell in which npm runs your package.json scripts. As a result it’s easy enough to create portable scripts or convert Linux-based scripts to also run on Windows. You just need to wrap the command in bash -c "....". For example, the above env-setting script can be recoded as follows:
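Roughly like this (again, the webpack part is illustrative):

```json
{
  "scripts": {
    "build": "bash -c \"NODE_ENV=production webpack -p\""
  }
}
```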

The only issue I found is the need to carefully quote " characters. For example, here’s a little script to prompt before deploying to GitHub Pages (I’m showing the full package.json entry for clarity):
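A sketch of the idea; the actual deploy command (here a git subtree push to a gh-pages branch) is illustrative:

```json
{
  "scripts": {
    "deploy": "bash -c \"read -p 'Deploy to GitHub Pages? Press Enter to continue or Ctrl-C to abort. ' && git subtree push --prefix dist origin gh-pages\""
  }
}
```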

Using a Linux VM

I often use a Linux VM as part of my development. With Vagrant it’s easy to provision a headless VirtualBox (or other) VM that shares the host filespace and exposes an SSH terminal. Thus you can edit using Windows tools like Visual Studio Code yet run everything in the Linux VM. This lets you run local tests in the same environment as a CI or CD system (which will usually be Linux, unless you are using Azure). One easy configuration I’ve used is the Quality Infrastructure from the GPII project.

Posted in development, web, Windows

Writing portable npm build scripts

tl;dr: Developers need to install and build JavaScript npm modules on Windows as well as *nix. With a little care this is possible without using heavyweight tools like Grunt and Gulp.

Modern HTML development usually includes a build and deploy process similar to those used in compiled development workflows. In this case, the assets that end up deployed and accessed by end users are the result of a pipeline of operations such as transpiling, concatenation, minifying and zipping. In addition, developers use these and other steps when developing, for example as part of test automation, on check-in, or as part of a continuous integration and deployment process. Perhaps somewhat surprisingly, the traditional build tools such as shell scripts, configure and Make (or Ant) are not commonplace. Rather, we often see newer JavaScript-based tools like Grunt, Gulp or Broccoli being the “go-to” choice. Critically, these tools do have the advantage of largely working cross-platform on Linux / OS X and Windows.

[Image: npm logo]

An alternative build option is to use npm’s scripts feature in the project or module package.json. You can use commands like ‘npm run test’ to invoke important build processes. This has the advantage of putting the scripts in the same place as the rest of your project configuration. Also, actions may be broken up into sub-actions or invoked through life-cycle triggers (like “before publish”); the scripts block below gives a flavour. Unfortunately, though, while npm tracks module dependencies, these are not used in the scripts to minimise the required build steps (as Make does). Perhaps that will come in time, but until then either everything gets built every time or you’ll need to call a build tool like Make from the scripts. One issue with Make is that while it is very effective it has a rather gnarly syntax and plenty of awkward features that you need to get to grips with. That said, common useful rules are simply implemented. Another tool, Webpack, looks interesting for building as it manages dependencies and works with modules rather than the files that Make deals in.
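A hypothetical scripts block showing sub-actions and a life-cycle trigger (all names and commands are illustrative):

```json
{
  "scripts": {
    "clean": "rimraf dist",
    "lint": "eslint src",
    "test": "npm run lint && mocha",
    "build": "npm run clean && webpack",
    "prepublish": "npm run build"
  }
}
```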

Both Make and npm scripts simply invoke the native command line shell to perform the actions for each build step, and this raises an issue when you want your build to work across platforms. The problem is that the shells have different syntax and command sets, so you have to restrict npm scripts to a least common subset. Fortunately you can manage portability with care. Even so, plenty of published modules exist that assume they are built on a *nix Bash shell and so break on Windows. You might think you could get away with running one of the Bash shell systems for Windows (eg MSYS, Cygwin), but npm always launches a Cmd shell (you can work around this by having your scripts run an extra bash shell, but that’s a bit hacky). More importantly, using bash requires configuring the target build system with yet another tool. We’d ideally like our build to work with just node (and thus npm) installed.

So assuming we have to write NPM scripts that run on both Bash and Cmd what can you do to reduce problems?

  • Separate commands in a single script with && (“and if no error”) or || (“or if error”) instead of the shell-specific terminators (; in Bash, & in Cmd). Remember you can invoke sub-scripts with “npm run xxx”
  • Modules like “concurrently” and “npm-run-all” add further task management options
  • Operators && || & < > and | all work pretty much the same in cmd and bash and offer a lot of power
  • Paths are a pain. While Windows system calls support the / separator it is also used for command options. Avoid as much as possible
  • In npm scripts “node_modules/.bin” is on the path, so any CLI command modules installed with --save or --save-dev will be available to scripts when the package is installed. For example “rimraf”, “mkdirp” and “ncp”. This avoids telling devs to do global installs of tools, which may conflict with other tools.

Of course JavaScript itself is an ideal platform-independent scripting tool, so you could use nodejs to create build scripts called from your npm scripts. After all, that is what Grunt and Gulp do by providing a full-on framework for build services. The choice, as always, is yours. A useful approach is to use the “shelljs” module, which provides a unix-style set of functions as an alternative to using the bash shell directly; a sketch follows. In addition “node-glob” provides wildcard expansions.
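For example, a small build script along these lines (the file names and the webpack step are illustrative) could be invoked from package.json with "build": "node build.js":

```javascript
// build.js - platform-independent build steps using shelljs
var shell = require('shelljs');

shell.rm('-rf', 'dist');                  // like rimraf
shell.mkdir('-p', 'dist/assets');         // like mkdirp
shell.cp('-R', 'src/assets/', 'dist/');   // like ncp

// Run the bundler and fail the build if it fails
if (shell.exec('webpack -p').code !== 0) {
  shell.echo('Error: webpack build failed');
  shell.exit(1);
}
```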

As a final thought, modules are usually distributed in source form and some contain native module source that must be compiled using a toolchain of Python and C++. Fortunately this is getting easier on Windows as described in Microsoft’s new nodejs Guidelines for Windows.

Posted in development, Windows

Dealing with Windows text line endings in git

Text line endings on Windows: Still painful after all these years

Once upon a time, in the days of Microsoft MS-DOS development, one of the main pain points and sources of bugs was the distinction between text and binary files. When you opened a file you had to say whether it was to be accessed in binary or text mode. In text mode the file content was translated when you read or wrote a string of text: the 2 specific characters ending each line in the file mapped to 1 specific character in memory, meaning you had to know the file contents and use the correct mode. Fortunately this translate-on-read-or-write issue has now mostly disappeared, but those 2 characters are still stored in files by Windows, and their legacy still causes pain whenever developers share files across platforms such as Windows, Linux and OS X.

[Image: an MS-DOS prompt waiting for the user to type a command. A legacy from MS-DOS days lurks in Windows.]

Shake your carriage

The characters in question are used to mark the end of each line of text (except if automatic text wrapping occurs). You don’t see them but they’re lurking there waiting to catch you out, especially when sharing files between OSs or when using version control.

These 2 characters are technically the ASCII control characters for Carriage Return (CR) and Newline or Line Feed (LF). Note that control characters are a special group that, rather than being printed, invoke some sort of action. They hark back to the days of Teletype printers, where CR would make the print head scoot back to the start of the line (the carriage being the mechanism carrying the print head) and LF would move the print head down a line without affecting the horizontal position. Thus, whenever a new line needed to be started a CR+LF pair would be sent to the Teletype.

These characters are represented in various ways in text files and programs, in ASCII or Unicode:

  • CR
    • 0x0D hex
    • “\r” in strings
    • Ctrl M or ^M
  • LF
    • 0x0A hex
    • “\n” in strings
    • Ctrl J or ^J

We’ve kept this ancient legacy, so the end of every text line (newline) is marked by these characters. Actually, that’s not exactly true. Rather, each OS uses a different set of characters, and that is the root cause of the problem:

  • Linux uses LF only
  • Windows sticks with CRLF
  • OS X for a while used CR only but now uses LF

As a quick aside, you can discover a file’s line endings by using the “file” command that comes with Linux tools for Windows like Cygwin or Git for Windows. If any line endings are not LF it will tell you. You can also use editors like the venerable Notepad++, which also lets you change the line ending format.

Return to the future

Life gets complicated when you need to share text files between these OSs, either directly (eg via network access) or by copying files, perhaps via version control tools. You can try to perform translation to the native format whenever you copy, or use tools that support either line ending. The danger with the latter approach is not processing all text files, or ending up with files with mixed line endings. Mixed line endings will confuse tools that often only check the start of a file to determine the line ending format. In either case, you’re likely to get strange effects in editors, such as joined lines or funny characters (eg ^M).

This problem surfaces quite often now with open source development, where contributors can be using any tools on any OS. In addition to sharing files via version control, developers sometimes share files between a VM and the host OS without checking out on each.

So perhaps the best approach is to standardise on a single format for all your files, namely LF. Fortunately these days most Windows programs that developers use support the LF-only style, whether they are Windows native or ports of Linux tools. The notable exception is dear old Notepad, which still insists on a CRLF pair to end each line (no doubt as it’s just a “souped up” edit control and Windows uses CRLF natively).

There are of course still issues and the ubiquitous git version control is one culprit you are almost certain to stumble across.

Make sure you git the right newlines

By default git assumes that your workspace files will use the OS-native newline format for all text files. It will also try to auto-detect text files. Internally, however, git uses LF only (usually) and translates on Windows during checkin and checkout. This is configured by the “core.eol” and “core.autocrlf” settings, which default to “crlf” and “true” in Git for Windows. These are hardcoded and not set in any of the usual git config files.

On the face of it this is good, as you get OS-specific line endings on each platform, but only if you always check out on the operating system you are working on. However, as noted above, developers often share files across OSs, so unless they standardise on a single format they’re likely to hit problems.

If you want to use LF universally for your project you need to configure git appropriately. These days that is pretty easy using gitattributes, usually in a .gitattributes file at the root of your project working tree. This overrides any --global, --system or local config settings, thereby ensuring a consistent experience in the project. You might possibly need to specify --local config settings as well, as some .gitattributes options fall back to those.

The catch, just as in those MS-DOS days, is that you must not translate anything if the file is not pure text but some other “binary” format, eg non-XML-based word processor files. If you translate these files you corrupt them, “simples”. Accordingly, git tries to auto-detect text files, but you can also explicitly declare which files are to be treated as either text or binary.

Gitting practical

This leads to 2 approaches to using LF everywhere:

  • Tell git to never translate anything
  • Tell git to always convert to LF in your workspace

Never translate

The first option seems safe, but you’ll have to ensure all the text files you [potentially] wish to share only ever contain LFs. That means making sure editors and other tools never use a CRLF when creating a file or editing lines. Not easy when CRLF is still the native Windows line ending.

Enter EditorConfig to the rescue! This is a standard configuration file supported by many editors that specifies format options, including line endings. Thus developers get a consistent editing experience and files are created the same way whatever editor or IDE they use. Some editors support EditorConfig directly and others have plugins. For example, the Visual Studio extension supports most options including line endings, but currently the Visual Studio Code extension only supports indent style, so is no use here.
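As a sketch, an .editorconfig enforcing LF endings might look like this (the extra properties are optional niceties):

```
# .editorconfig at the project root
root = true

[*]
end_of_line = lf
insert_final_newline = true
charset = utf-8
```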

The way to stop git translating anything is to use a .gitattributes entry of “* -text”. This simply says nothing should be treated as text. You can always override it for specific filename patterns, for example “*.txt eol=lf”.

The other thing you can do is ensure your development workflow includes a check for CRLF line endings. For example, you can check all files, including binary, using something like “grep -Url $'\x0D' *” in Git for Windows. This will return 0 if there are any matches, 1 otherwise.

Always LF

Alternatively, you may want to use the second option of having git translate line endings to LF in your workspace. But bear in mind it only translates on checkout. Thus any CRLFs will remain in your workspace until you go through a complete checkin/checkout cycle. Once again you’ll probably want to use EditorConfig to specify LF line endings for all new writes.

To get CRLFs translated you’ll need to force git to check out your files over the existing copies, as by default it doesn’t want to. Otherwise you can leave your workspace in a strange intermediate state that is different from what anyone else will experience when they clone or check out the code. This could potentially be a source of hard-to-track bugs (though it’s most unlikely). If you use Continuous Integration in your workflow then any potential problems will be quickly found.

To be fair, git gives a loud warning when you are in a state where a checkout will change the line endings. However, that warning is slightly confusing.

[Image: Git warning when line endings are not yet translated.]

Git and editors may also complain about the mixed line endings issue described above.

To configure git for this option use “* eol=lf” in .gitattributes; a sketch follows below. As this will force all files to be treated as text and so converted on checkin, make sure you explicitly mark any binary files with lines like “*.png binary”. If you don’t then your checked-in files may be corrupt, and you may not notice for some time and be stuck with a hard-to-fix problem.
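For example, a .gitattributes for this approach might look something like this (the binary patterns are just typical web-project examples; list whatever applies to yours):

```
# .gitattributes - force LF in the workspace for everything treated as text
* eol=lf

# explicitly mark binary files so they are never translated
*.png binary
*.jpg binary
*.gif binary
*.ico binary
*.woff binary
```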

Note when you first set this option you’ll probably get a load of warnings and all files will appear to change. See the notes on .gitattributes end-of-line conversion for the steps to overcome this.

Coming soon

A gitattributes option to support “* text=auto eol=lf” has been discussed. This would turn on auto text-file detection and then use LF line endings for any text files. Currently the “eol=lf” option turns on text handling for all files, so you need to carefully declare all binary files. That’s good practice anyway, as no doubt git could detect incorrectly, but at least it would not be critical. We should push for this option.

By the way, EditorConfig should soon support an “end_of_line = native” option that will use whatever line ending makes sense for the OS. That will play better with the default git behaviour, but doesn’t help when files are shared without a checkout, such as in VMs.

Posted in development, Windows

Working with Windows native code from node.js

[UPDATE 02 Feb 2016: While this post discusses Win32 access, here’s an interesting option for UWP access from JXCore that should eventually work with nodejs when the Microsoft PR for Chakra is merged.]

While the node.js ecosystem provides an amazing number of modules covering almost every imaginable use, sometimes you want to work with existing code created in other languages and tool chains. For example, you may have an existing C++ library or perhaps you want to call operating systems APIs not yet available in npm or elsewhere.

When integrating between different language infrastructures you have a choice of which side of the divide to write the required glue code: glue that provides data marshalling, function calling and event processing. If you want to access code with a C-style calling convention then it is relatively easy to add code on the C side, as node is itself created in C++. This is easily enough done by creating C/C++ addons, but often involves reams of boilerplate code. However, if you do choose that option then you’re going to want to use a tool like nan to make your life tolerable. As the nan readme explains:

Thanks to the crazy changes in V8 (and some in Node core), keeping native addons compiling happily across versions, particularly 0.10 to 0.12 to 4.0, is a minor nightmare. The goal of this project is to store all logic necessary to develop native Node.js add-ons without having to inspect NODE_MODULE_VERSION and get yourself into a macro-tangle.

If you want to work on the JavaScript side of the divide then the ref module provides all the facilities you need for marshalling to and from the C world. It does this by extending node’s Buffer class to provide a type system and facilities for:

  • Getting the memory address of a Buffer
  • Checking the endianness of the processor
  • Checking if a Buffer represents the NULL pointer
  • Reading and writing “pointers” with Buffers
  • Reading and writing C Strings (NULL-terminated)
  • Reading and writing JavaScript Object references
  • Reading and writing int64_t and uint64_t values
  • A “type” convention to define the contents of a Buffer

Further related ref modules support JavaScript representations of other C/C++ types, including arrays, structures and unions; the sketch below shows the struct flavour.
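A tiny illustration of ref and ref-struct (the Point type is purely hypothetical):

```javascript
var ref = require('ref');
var StructType = require('ref-struct');

// Describe a C struct: struct Point { int x; int y; };
var Point = StructType({
  x: ref.types.int,
  y: ref.types.int
});

var p = new Point({ x: 10, y: 20 });
console.log(p.x, p.y);   // 10 20
console.log(p.ref());    // a Buffer holding a pointer to the struct's memory
```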

Building on ref’s facilities is node-ffi, which provides a foreign function interface (FFI) for loading and calling functions exported by dynamic libraries (DLLs on Windows). It is also possible to call functions in the current process, ideal for functions in static libraries.

While this eliminates large amounts of C boilerplate, it does have a significant calling overhead. Accordingly you are unlikely to want to use it for functions called in a tight loop or otherwise time sensitive applications.

Here’s a simple example from the node-ffi documentation for wrapping libm’s ceil() function, which takes a double parameter and returns a double result, and also the statically linked atoi(), which takes a string and returns an int.
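It goes along these lines (adapted from the node-ffi README):

```javascript
var ffi = require('ffi');

// Wrap ceil() from the dynamic libm library
var libm = ffi.Library('libm', {
  'ceil': [ 'double', [ 'double' ] ]
});
libm.ceil(1.5); // 2

// Pass null to access functions in the current process, e.g. the static atoi()
var current = ffi.Library(null, {
  'atoi': [ 'int', [ 'string' ] ]
});
current.atoi('1234'); // 1234
```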

A more complex example can be seen in some code I wrote for the GPII system for automatic personalisation from preferences. This is perhaps a slightly unusual application of Node.js, as it runs on a Windows device in order to launch and configure various Windows settings and assistive technology programs.

The actual code provides a function GetDisplayResolution() that calls the Windows API EnumDisplaySettings(), which returns its results in the fairly complex DEVMODE structure. Note that the DEVMODE structure includes nested unions of structures, and while the ref modules support these I decided to flatten out the declaration (after testing my assumptions about packing and padding).

Posted in development