Full Stack JavaScript Engineering

This is Code Fellows' textbook for The Full-Stack JavaScript Development Accelerator.

It's a GitBook project.

Read Online

This book's latest published form is available at Full Stack JavaScript Engineering.

Installation

```
npm -g install gitbook
git clone https://github.com/codefellows/Full-Stack-JavaScript-Engineering.git
```

Usage

Browse locally:

```
cd Full-Stack-JavaScript-Engineering
gitbook serve .
```

Publishing

Build the Gitbook and check in the changes to the public folder:

```
gitbook build . --output=public
git add public
git commit
git push heroku master # must be a contributor
```

Contributing

Send us a pull request here on Github.

For more info, see the GitBook README.

Pre-work

These pre-work tasks are optional, but recommended if you have any knowledge gaps. They are here to make sure you have a good foundational knowledge of JavaScript, jQuery, Git, and HTML/CSS.

Code School

We love CodeSchool.com's interactive courses. They are a great way to level up before the Development Accelerator.

Try jQuery

JavaScript Road Trip, Parts 1 and 2

These cover the basics of JavaScript. You may already know this stuff, especially if you did a JavaScript Code Fellows foundations course. That's OK: skip to JS Road Trip Part 3 if you already have a good foundation.

Try Git

Discover Chrome DevTools

Crockford on JS Lecture Series

Watch the first Crockford on JS and any other lectures in the series. The whole series is 8+ hours long, so it will take some time to get through.

Asana

We use the Asana project management / TODO list system to keep track of assignments in class. Watch the Intro Video and any other videos or help documents necessary there.

The only thing that's different about the way we use Asana is that an instructor will check off when you are done with an assignment. You can always comment "DONE" if you need us to review the work.

Required State Paperwork

Code Fellows LLC is licensed as a technical training school by the State of Washington and we follow all state laws and regulations. All students are required to fill out two important forms:

1. The demographic survey
2. Acknowledgement of receipt of the course catalog and honor code

Please print out these forms, fill them out and sign them, and bring them to the first class meeting.

Connect to IRC

Internet Relay Chat (IRC), despite being an ancient chat protocol, is widely used in the JavaScript, Node.js, and general web development communities. It's important that you're familiar with it and can use it to get help and to connect with others in the community. We have our own channel to model this community. The channel has two "bots": a logging bot written in Ruby, and a fun bot called "codehue" written in Node.js.

Connect to the class IRC channel #codefellows on freenode.net and say hello. If you want to be able to talk on the channel, you need to register a nickname with Freenode; type /msg NickServ help to get started from your IRC client. On Mac, the best free client is LimeChat. On Unix, your package manager should have XChat or Pidgin. http://www.irchelp.org/ is a great intro site if you're new to IRC. A short article on IRC etiquette: http://geoff.greer.fm/2012/05/19/programmer-irc-etiquette/ Also see https://speakerdeck.com/ivanoats/a-good-question

There is no submission for this assignment. We'll see you on the #codefellows channel in class, and on Gitter.IM. They are very useful for sending links to everyone during class, asking questions in the evening, or just socializing.

We also use Gitter.IM as a chatroom just for our class. Click on this button to join.

Day One

Computer Setup Questions
verify node and npm version
Icebreaker.js
Core Protocols (Keynote)
Go over JS tools mind map
Pull Request Workshop

Asana overview; mention the want ads assignment

don't check off things
add your name as a subtask, use the URL of that subtask
GitHub integration: where is your API key?
commit message format

Show video and talk about Agile, intro the Along-the-Way project, and share task in Asana

Check in on IRC / Gitter.IM
Demo Grunt and File Redirection

A List of Foundational JavaScript Tools
by Kalina Wu, Ivan Storck, and Sarah Fischer

In our development accelerator, students are introduced to several tools and libraries to expand the abilities of their code. Kalina, one of our former JavaScript students, compiled a list of these tools and wanted to share it with other Code Fellows.

Ivan Storck, our JavaScript Development Accelerator instructor, used Kalina's list to draft up this helpful mind map:

General

Scaffolding Tools (for starting projects)

Yeoman - Yeoman is a robust and opinionated client-side stack, comprising tools and frameworks that can help developers quickly build beautiful web applications.

Build Tools (automation)

Grunt.js - The Grunt ecosystem is huge and it's growing every day. With literally hundreds of plugins to choose from, you can use Grunt to automate just about anything with a minimum of effort.
Pint.js (Grunt helper) - Pint is a small, asynchronous, dependency-aware wrapper around Grunt, attempting to solve some of the problems that accompany a build process at scale.
Gulp.js - Gulp's use of streams and code-over-configuration makes for a simpler and more intuitive build.
Browserify.js (for browser) - Browserify is a development tool that allows us to write node.js-style modules that compile for use in the browser. Just like node, we write our modules in separate files, exporting external methods and properties using the module.exports and exports variables.
Uglify.js - Uglify.js is a JavaScript parser / mangler / compressor / beautifier library for NodeJS.

Package Management Tools

Homebrew (Mac OS) - Homebrew installs the stuff you need that Apple didn't.
Apt (Ubuntu) - The apt-get command is a powerful command-line tool, which works with Ubuntu's Advanced Packaging Tool (APT) performing such functions as installation of new packages, upgrade of existing software packages, updating of the package list index, and even upgrading the entire Ubuntu system.
NPM - npm is the official package manager for Node.js.
Bower - Bower is a package manager for the web.

Front End

MVC Frameworks

Backbone.js - Backbone.js gives structure to web applications by providing models with key-value binding and custom events, collections with a rich API of enumerable functions, and views with declarative event handling. It connects it all to your existing API over a RESTful JSON interface.
Ember.js - Ember makes Handlebars templates even better by ensuring your HTML stays up-to-date when the underlying model changes. To get started, you don't even need to write any JavaScript.
Angular.js - AngularJS lets you extend HTML vocabulary for your application. The resulting environment is extraordinarily expressive, readable, and quick to develop.

Templates

Handlebars.js - Handlebars provides the power necessary to let you build semantic templates effectively with no frustration. Mustache templates are compatible with Handlebars, so you can take a Mustache template, import it into Handlebars, and start taking advantage of the extra Handlebars features.
Mustache.js (less built-out than Handlebars) - Mustache is a simple, logic-less template syntax with implementations available for ActionScript, C++, Clojure, CoffeeScript, ColdFusion, D, Erlang, Fantom, Go, Java, JavaScript, Lua, .NET, Objective-C, Pharo, Perl, PHP, Python, Ruby, Scala, and XQuery.
Jade - Jade is a node template engine designed primarily for server-side templating in node.js.
Haml-js - Haml-js allows the Haml syntax to be used in a JavaScript project. It has most of the same functionality as the original Haml.
Eco - Eco lets you embed CoffeeScript logic in your markup.

Testing

Casper.js - CasperJS is a navigation scripting and testing utility for PhantomJS and SlimerJS, written in JavaScript.
Zombie.js - Zombie.js is a lightweight framework for testing client-side JavaScript code in a simulated environment. No browser required.

Back End

Servers

Express - Express is a web application framework for Node.
Node - Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications.

Databases

MongoDB - MongoDB is an open-source document database, and the leading NoSQL database.
Postgresql - PostgreSQL is a powerful, open source, object-relational database system.
SQL - SQL is used to communicate with a database. According to the American National Standards Institute, it is the standard language for relational database management systems.

Architectural Style

RESTful - Representational State Transfer is an architectural style consisting of a coordinated set of architectural constraints applied to components, connectors, and data elements, within a distributed hypermedia system.

Testing

Cucumber.js - Cucumber.js takes the popular behavior-driven development tool and applies it to your JavaScript stack.
Jasmine - Jasmine is a behavior-driven development testing framework for JavaScript. It does not rely on browsers, DOM, or any JavaScript framework. Thus it's suited for websites, Node.js projects, or anywhere that JavaScript can run.
Mocha - Mocha is a feature-rich JavaScript test framework running on node.js and the browser, making asynchronous testing simple and fun.
Q-Unit - Q-Unit is a powerful, easy-to-use JavaScript unit testing framework. It's used by the jQuery, jQuery UI and jQuery Mobile projects and is capable of testing any generic JavaScript code.

Assertion Libraries

Chai - Chai is a BDD / TDD assertion library for node and the browser that can be delightfully paired with any javascript testing framework.

Functional Programming Tools

Underscore.js - Underscore is a JavaScript library that provides a whole mess of useful functional programming helpers without extending any built-in objects.
Lo-Dash - Lo-Dash is a utility library delivering consistency, customization, and performance.

Update:

Have a tool you think should be on the list? Check out this article and the associated MindNode mind map (OPML) on Github. Submit a pull request and send us your suggestions to add new and popular tools!

Pull Request Practice

Create a folder with your name in the class repository.

Send a Pull Request to the class repo, with your own folder.

Inside your folder should be a single .md file that contains some basic info about you. You should include your GitHub username, linked to your GitHub profile. Also link to your Twitter account, your LinkedIn page, and any other relevant information or online presence you'd like to share with your classmates.

I have created an example folder for myself.

Computer Setup

Set up your computer with the following tools:

Latest version of Ruby (for Sass and other tools)
Node.js, MongoDB, Redis

Editors: We use Atom.io or Sublime Text 3 in class, and I'm betting you already do too (unless you rock Vim or Emacs). Sublime Text has a fully-featured, unlimited time Trial mode.

Optional: if you are coming from an IDE like Visual Studio or Eclipse, you may like WebStorm (trial version) better than Sublime Text because of the autocompletion and debugging tools. It's also cheaper for an academic license ($79).

And if you're a strict proponent of open source, or want to dogfood and customize your editor in JavaScript, there are two great free editors: Brackets and Light Table.

Sign up for these free web services:

GitHub (you may have this already, but there is also a student discount at https://education.github.com/discount_requests/new; try it while you're here)

Mac OS:

Homebrew: http://brew.sh (note: the instructions are at the end of the web page)

rbenv, ruby-build, ruby 2.1.1 and the sass gem

```
brew doctor
brew update
brew install rbenv ruby-build rbenv-gem-rehash
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bash_profile
echo 'eval "$(rbenv init -)"' >> ~/.bash_profile
rbenv install 2.1.1
rbenv global 2.1.1
gem install sass
```

Do NOT use sudo to install ruby or gems. If you get a permissions error when installing sass, somehow system ruby is still active. Try restarting your terminal, or if the error persists, check for the items above in your .bash_profile file.

Node.js

```
brew install nvm
nvm install 0.10
nvm alias default 0.10
```

Add source $(brew --prefix nvm)/nvm.sh to your .bash_profile or .zshrc.

Install some commonly used packages with npm: npm -g install grunt-cli

Reference the NVM README if you get stuck.

Totally optional, but you may want a relational database. Only do this if you have time. I choose PostgreSQL:

Follow Ivan's blog post: https://www.codefellows.org/blogs/how-to-install-postgresql

Pick a programmer's editor:

Try out http://atom.io and ask around for an invite.
Or go with the crowd and choose Sublime Text 3: http://www.sublimetext.com/3 with Package Control: https://sublime.wbond.net/installation
Or, try Adobe's open source http://brackets.io

Which one to choose? I like Atom.io and Brackets because you can customize them with JavaScript. Customizing Sublime requires knowledge of Python.

MongoDB

```
brew install mongodb
```

Follow the directions that homebrew tells you. You can always do brew info mongo. You can start mongo with brew services start mongo and stop it with brew services stop mongo.

Redis

```
brew install redis
```

Follow the directions that homebrew tells you. You can always do brew info redis. You can start redis with brew services start redis and stop it with brew services stop redis.

Heroku Toolbelt

brew install heroku-toolbelt

Ubuntu:

No need for Homebrew; you already have a perfectly good package management system.

In your terminal preferences, make sure that "Run command as a login shell" is enabled in your profile preferences. Check these two screenshots: http://cl.ly/image/220M3f093v2M http://cl.ly/image/3i2O0y0A3e04

rbenv, ruby-build, and ruby: https://www.digitalocean.com/community/articles/how-to-install-ruby-on-rails-on-ubuntu-12-04-lts-with-rbenv--2 NOTE: you DO NOT have to buy a Digital Ocean server; these are instructions for how to install LOCALLY. Ignore the "create a server droplet" step. NOTE: replace 1.9.3 with the latest version of ruby: 2.1.1. An alternative way to install rbenv: https://github.com/sstephenson/rbenv#basic-github-checkout

gem install sass (DO NOT use sudo to install gems)

node.js: compile node from source, following the directions here. Install Grunt-CLI (command line interface): npm -g install grunt-cli

PostgreSQL (again, totally optional and only do if you have time): follow Ivan's blog post: https://www.codefellows.org/blogs/how-to-install-postgresql. No need to install the pg gem; you won't be connecting to Postgres from Ruby here.

Sublime Text 3: http://docs.sublimetext.info/en/latest/getting_started/install.html

MongoDB: https://www.digitalocean.com/community/articles/how-to-install-mongodb-on-ubuntu-12-04 (same note as above: install locally, ignore the droplet step)

Redis: https://library.linode.com/databases/redis/ubuntu-12.04-precise-pangolin (same as above)

Heroku Toolbelt: sudo apt-get install heroku-toolbelt

Test Out Grunt

Testing whether Grunt works on a project shows whether you have completed most of the needed setup tasks.

Let's just make sure your computer is set up with node and npm and can run tests.

1. Clone this Github project.
2. Install the npm modules needed with npm install, then run the tests and redirect the output to a text file (see the example below).
3. Create a subtask with your name.
4. Comment on your subtask, with a link to a text file posted on gist.github.com.
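For step 2, the redirection can look like this (the exact grunt task name depends on the project; grunt test and the output filename are assumptions, not from the original):

```
npm install
grunt test > test-output.txt
```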

Installing Node from Source

This is only if you need to, generally just on Linux systems.

Go to the node website, click Download, and then click the source download.

Go to the directory where you downloaded the source and do:

```
tar -xzvf node-v<version>.tar.gz   # placeholder: use the file name you actually downloaded
cd node-v<version>
./configure --prefix=~/.node
make && make install
```

Add the following to your shell startup script (.bash_profile, .bashrc, or .profile):

```
export PATH=$PATH:$HOME/.node/bin
export NODE_PATH=$HOME/.node/lib/node_modules
```

Making sure it works

Restart your shell, and the command node -v should print out the current node version. You should now be able to install Node packages globally without sudo. Try npm -g install jshint. If that works, without any EACCES errors, you're good!

Day Two

Class Structure
Instructor Availability in the Mornings
Open Questions
Demo: Making a branch and pull request to your own repo
Node Slides
Node Beginner Book
NodeSchool.io
Async

Demo the Node REPL

Use the node REPL (Read, Evaluate, Print Loop). Simply type node from the command line.

Process.nextTick

```
var truth_value = false;
process.nextTick(function() {
  console.log(truth_value);
});
truth_value = true;
```

What will the output be? False or True?

The answer is that the output will be true. Why? You might have thought it would be false, right? It's as if the statements run out of order. That's because process.nextTick places our callback containing console.log on the event queue, so it only runs after the current code finishes, by which point truth_value has already been set to true.
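Here is the same snippet again with comments marking the order in which each line actually runs:

```
var truth_value = false;          // 1. runs immediately

process.nextTick(function() {     // 2. schedules the callback, does not run it yet
  console.log(truth_value);       // 4. fires after the current code completes: prints true
});

truth_value = true;               // 3. still part of the current tick, runs before the callback
```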

Hello Express

Express is a minimalistic web framework built on top of Node.js. Based on Ruby's Sinatra framework, it abstracts away a lot of the boilerplate code required to get a Node server up and running. Created by TJ Holowaychuk, Express is built using Connect, another abstraction for creating web servers with Node. Express 3.x includes a suite of middleware that were abstracted into their own modules in Express 4. Read more about it here

The first step in creating an express application from scratch is to create a new folder with mkdir hello_express. Change into the directory with cd hello_express and create a file with the name package.json. Inside of the file place the following:

{ "name" : "hello-express", "description" : "a hello world web application written in express", "version" : "0.0.1", "dependencies" : { "express" : "^4.0" } }

A package.json file is found in nearly every Node package or application. It tells npm about our application. The name and description would appear in npm if we were creating a node package. The version is the Semantic Versioning version of our application, and the dependencies tell npm what packages we need in order to run our application. In this case the only package that we need is express. After saving this file, run npm install from the command line in our hello_express directory, and npm will install Express and all of its dependencies into a folder called node_modules. Now seems like a perfect time to create a git repository for our application.

```
git init
touch .gitignore
echo "node_modules/" >> .gitignore
git add .
git commit -m "add package.json and .gitignore"
```

First we need to create a .gitignore file. This file tells git not to include our node_modules folder in our version control. This folder can get quite large, and we already have our dependencies declared in our package.json file, so it would be redundant. Now we need to create a simple web server. Create a file called server.js and add the following code:

```
var express = require('express');
var http = require('http');

var app = express();

app.get('/', function(req, res) {
  res.send('hello world!');
});

var server = http.createServer(app);
server.listen(3000, function() {
  console.log('the server is running on port 3000');
});
```

In this file we first require the express package. We then require http, which will be used to create the actual server. Then we create our app by calling the root express function. The app.get line handles a GET request to our root URL and simply sends 'hello world!' to the browser. In the final section we create a server and start it listening on port 3000; we pass a callback that gets called when the server is running, which simply logs 'the server is running on port 3000' to the console. To start our server simply run node server.js from the command line. Then point your preferred browser to http://localhost:3000, and you should see the text hello world!.

Now this particular server isn't especially useful or interesting but we can modify it to serve static html pages using one of the few optional middlewares that didn't get abstracted out of Express 4, static. Modify your server.js file to look like this:

```
var express = require('express');
var http = require('http');

var app = express();

app.use(express.static(__dirname + '/public'));

var server = http.createServer(app);
server.listen(3000, function() {
  console.log('the server is listening on port 3000');
});
```

Our server now serves any file located in the /public directory. The __dirname in this version of server.js points to the root directory of our application; it is a node global and is available anywhere in a node program. Next we need to create the /public directory: run mkdir public from the console. Now create an index.html file in the public directory and add the following to it:

```
<!DOCTYPE html>
<html>
  <head>
    <title>Hello World Express</title>
  </head>
  <body>
    <h1>Hello World from an html document!</h1>
  </body>
</html>
```

If you close the server we had running and run node server.js again, when you browse to http://localhost:3000 you should see the text Hello World from an html document. You can also serve up anything you place in the public directory, including JavaScript files, images, CSS stylesheets, and other HTML files. Don't forget to commit the changes!

```
git add .
git commit -m "serving static files"
```

Day Three

On day three, we will cover:

Responsive Web Design
Grunt
Yeoman

28% of website traffic comes from mobile. Are you prepared?
by Elliot Chong and Sarah Fischer

It happens to the best of us.

You're standing in a store, product in front of you, and you wonder what users are saying about it. You pull out your smartphone and do a quick search for the brand.

The site has great functionality and engaging features — for a desktop. You try to check tiny boxes and navigate little menus on your 3-inch screen, cursing your thumbs for not being more nimble and promising yourself that any site you build will be responsive for smartphones.

There is a linear relationship between the number of smartphone users and the need for responsive websites. But creating a website or application that looks good and works well on a desktop, tablet, and smartphone is tricky. Think the solution is to simply create two separate sites with optimized CSS for mobile and desktop users? Think again.

3 problems with using different code bases for different devices

Problem #1: User-Agent Redirects

User-agent redirects detect the user's device and redirect from a desktop URL to one that displays and functions correctly on a mobile device, usually a subdomain at m.example.com.

Problem #2: Two Code Bases

Two. Code. Bases. Assuming that isn't enough reason right there to abandon this duplicate-site notion, consider the additional work and coordination to update both code bases.

Problem #3: URL Sharing Between Devices

A user is so impressed with your site or product that they share it on their social network from their phone. Sweet!

But half of their connections click the link and view it on a desktop, and the URL leads them to the mobile version of the site, which ends up looking narrow and broken on their 17-inch MacBook Pro. They're left unimpressed (and even a little put-off) and you're left with potential customers thinking your site isn't user-friendly.

Damn.

What about tablets?

Tablets (and the awkward middle-ground phablets — pick a side, already!) bring yet another size and user experience to consider. Some tablets come with cases that have built-in keyboards, which means they can function like a laptop. But users still want the option to use the touch screen and don't bother with keyboard add-ons.

What's a developer to do?

Responsive Web Design

Responsive Web Design (RWD) conditionally modifies the layout of a webpage depending on the width of the device it's being viewed on.

Simple.

Mozilla's resource for web developers puts it oh so nicely:

Media queries, added in CSS3, let the presentation of content be tailored to a specific range of output devices without having to change the content itself.

In other words, you modify the CSS based on the browser's properties below (a short example follows the list):

Width / Height

Orientation

Media Type (Screen, TV, Braille, etc.)

Color

Resolution

Aspect-Ratio
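For instance, a minimal width-based media query (the class name is illustrative, not from the original) might hide a sidebar on narrow screens:

```
/* Hide the sidebar when the viewport is 600px wide or less */
@media screen and (max-width: 600px) {
  .sidebar {
    display: none;
  }
}
```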

Our website is responsive — resizing the browser changes the layout of the text and images.

Using RWD on your website

Applying a grid layout to your website allows it to easily transition from phone to tablet to desktop displays, depending on the user's device. Mobile-tuned JavaScript enhances the user’s experience. Touch-optimized menus, for example, are beautifully simple and easy to use on a smartphone when implemented correctly.

Smart responsive web design is like the stage crew at a theater production — everything is going right when you don't notice it at all.

So you want to build a responsive website

Where do you go from here? There are some options for transitioning your website to a responsive layout, if you're ambitious and want to get started with RWD:

1. Foundation. This open source product by Zurb is a responsive front-end framework that offers several different HTML templates to choose from.
2. Skeleton. This boilerplate for developing mobile-friendly websites gives you a foundation for your website. The bonus of Skeleton is its simple syntax and provision of basic styles, which means the look of your site is entirely up to you.
3. HTML and CSS: design and build websites by Jon Duckett. If you want to skip the templates and create your own responsive website from scratch, Ch. 15: Layouts offers great instruction on grid layouts. This is also a great place to go if you're just getting started in web development.
4. Beginner's Guide to Grids with Zurb Foundation 5

Try it out—your mobile visitors will sing your praises (or just appreciate how easy your website is to use on a smartphone. Win-win!).

Grunt

Review the slides below:

Personal Blog Site Tutorial with Yeoman and Zurb

Example Blog Layout and Tutorial

This is a simple blog made as a teaching example. Made with:

Yeoman
generator-browserify
Zurb Foundation Blog Template

Other Technologies Used

Grunt
NPM
Sass (SCSS)
SVG (headshot image of me)
svgo (npm install -g svgo)

Tutorial

Prerequisites

Node and NPM installed. I recommend brew install nvm on Mac OS X instead of brew install node. See the nvm README for more details.
Yeoman and Grunt installed: npm install -g yo grunt-cli

How I made this app

Grab my copy of generator-browserify (until this pull request is closed).

npm -g install ivanoats/generator-browserify

Generate the app skeleton

```
mkdir blog && cd blog
yo browserify
```

Choose Grunt as the build system.
Choose Foundation as the front-end framework.
Yes, you'd like to include Modernizr to support your grand-dad on IE8.
No, let's skip Jade templating for now.
Choose Libsass as the sass compiler so that you don't need a Ruby dependency in your project.

You'll see a lot of text scroll by; on my system the last lines looked like this:

```
[email protected] node_modules/grunt-sass
├── [email protected]
└── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected])
```

Your directory listing should look something like this:

```
total 80
drwxr-xr-x  13 ivan  staff    442 Apr 17 12:40 .
drwxr-xr-x 256 ivan  staff   8704 Apr 17 12:36 ..
-rw-r--r--   1 ivan  staff     42 Apr 16 15:14 .bowerrc
-rw-r--r--   1 ivan  staff    214 Apr 16 15:14 .editorconfig
-rw-r--r--   1 ivan  staff     11 Apr 16 15:14 .gitattributes
-rw-r--r--   1 ivan  staff     65 Apr 16 15:14 .gitignore
-rw-r--r--   1 ivan  staff    390 Apr 16 15:14 .jshintrc
-rw-r--r--   1 ivan  staff  11094 Apr 17 12:40 Gruntfile.js
drwxr-xr-x   7 ivan  staff    238 Apr 17 12:40 app
-rw-r--r--   1 ivan  staff    213 Apr 16 15:14 bower.json
drwxr-xr-x   2 ivan  staff     68 Apr 17 12:40 dist
drwxr-xr-x  32 ivan  staff   1088 Apr 17 12:40 node_modules
-rw-r--r--   1 ivan  staff   1277 Apr 17 12:40 package.json
```

Now type grunt serve to launch the app in a web browser. You should see something like this:

That's great but let's start with a simpler blog layout: Go to http://foundation.zurb.com/templates.html and download the blog layout HTML. Put that in the body tag of app/index.html in your project.

You can now start customizing your blog with the following files:

app/index.html
app/scss/app.scss
app/images

Here's what I did: Go to town! This generator also includes BackboneJS, so you can even make your blog a single-page app.

Roadmap for the future of this app

Add a simple MongoDB / Express app as the blog's REST API
Build out the site with BackboneJS
Add some CasperJS, ZombieJS, or SuperAgent tests.

Contributing

Tested Pull-Requests welcome! I will list you as a contributor.

Day Four

Agenda

Acceptance Testing with CasperJS
Deploying to Heroku
Sass, a CSS pre-processing language

Acceptance Testing with CasperJS

Acceptance testing is also known as "outside-in" or "black-box" testing. It tests a system just like a web browser does, except instead of a person clicking in a web browser, a "headless" browser operates from the command line, a bit more behind the scenes.

There are many options for acceptance testing, but we will be using one called CasperJS.

Write our first acceptance test

Let's just test to see if the home page is loading OK, and that the title tag and H1 tags are what we expect. Here's the code; it goes in test/acceptance/home_page_test.js:

BTW, if you want to make a new directory multiple levels deep, you can use: mkdir -p test/acceptance from your project's home directory.

```
'use strict';
/*global casper*/

casper.test.begin('home page', 3, function suite(test) {

  casper.start('http://localhost:3000/', function() {
    test.assertHttpStatus(200);
  });

  casper.then(function() {
    test.assertTitle('Hello World Express', 'title is Hello World Express');
  });

  casper.then(function() {
    test.assertSelectorHasText('h1', 'Hello World');
  });

  casper.run(function() {
    test.done();
  });

});
```

So, we have three assertions that we expect to be true. The status should be 200 OK, the title should be "Hello World Express", and the h1 should include the text "Hello World".

Run our acceptance tests

To run our acceptance test we'll need to make sure to start the express server. We will use a grunt plugin to automate this.

You can do this on your personal portfolio site, or your hello world express code.

Hook up Grunt-Express-Server

From the command line: npm install grunt-express-server --save-dev

And in Gruntfile.js add: grunt.loadNpmTasks('grunt-express-server');

Install and Configure CasperJS and PhantomJS

Install Casper and PhantomJS globally, and Grunt integration locally

```
npm install -g phantomjs casperjs
npm install grunt-casper --save-dev
npm install grunt-express-server --save-dev
```

Edit your Gruntfile.js to include tasks like these below:

```
'use strict';
module.exports = function(grunt) {

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-express-server');
  grunt.loadNpmTasks('grunt-casper');

  grunt.initConfig({
    express: {
      options: {
        // Override defaults here
      },
      dev: {
        options: {
          script: 'server.js'
        }
      },
      prod: {
        options: {
          script: 'server.js',
          node_env: 'production'
        }
      },
      test: {
        options: {
          script: 'server.js'
        }
      }
    },
    casper: {
      acceptance: {
        options: {
          test: true
        },
        files: {
          'test/acceptance/casper-results.xml': ['test/acceptance/*_test.js']
        }
      }
    }
  });

  grunt.registerTask('server', ['jshint', 'express:dev']);
  grunt.registerTask('test', ['express:dev', 'casper']);
  grunt.registerTask('default', ['jshint', 'test']);

};
```

I added a server task that runs the express server after JSHint passes. I added a test task that sets up the express server in dev mode, and then runs the casper tests. I set the default task to run JSHint and then the test task.

Try it out

Now try grunt test from the command line and see what happens…

Sass

Sass is CSS with Superpowers

Credit is due to Dale Sande for preparing this material.

Sass is an extension of CSS that adds power and elegance to the basic language. It allows you to use variables, nested rules, mixins, inline imports, and more, all with a fully CSS-compatible syntax. Sass helps keep large stylesheets well-organized, and get small stylesheets up and running quickly, particularly with the help of the Compass style library.
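As a small illustration of those features (not from the original text; the variable and selector names are made up), variables and nesting look like this in SCSS:

```
// A tiny SCSS sketch: a variable, nesting, and a built-in color function
$brand-color: #2a7ae2;

.post {
  h1 {
    color: $brand-color;
  }
  a:hover {
    color: darken($brand-color, 15%);
  }
}
```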

Is Sass somewhat of a mystery to you? How does it work? Why do some say that it is better than CSS?

Official documentation: http://sass-lang.com/documentation/file.SASS_REFERENCE.html
More info: http://coderecipez.roughdraft.io

Node Sass and Grunt

Sass was originally a Ruby gem, but it is now also available as an npm package. You can npm install node-sass in your projects and wire it into Grunt, as in the sketch below.
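A minimal sketch of hooking node-sass into Grunt via the grunt-sass plugin (the file paths are assumptions, not from the original):

```
// Gruntfile.js (sketch): compile SCSS using the grunt-sass plugin
module.exports = function(grunt) {
  grunt.loadNpmTasks('grunt-sass');

  grunt.initConfig({
    sass: {
      dist: {
        files: {
          // destination: source (paths are illustrative)
          'public/css/app.css': 'app/scss/app.scss'
        }
      }
    }
  });

  grunt.registerTask('styles', ['sass']);
};
```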

Heroku

Let's get our site LIVE ON THE WEB!! This process is called deployment.

Slides

Slides from class introducing Heroku.

Installation

Make sure you have the Heroku Toolbelt installed.

You can usually brew install heroku-toolbelt or sudo apt-get install heroku-toolbelt. If those don't work, you may need to download it.

Also, if you haven't already, sign up for an account on Heroku.com.

Login

Use heroku login to log in to heroku from the command line.

If you're already logged in, you can use heroku auth:whoami to see who you are logged in as.

Create a heroku app

You'll want a nice name for your app instead of the random ones Heroku gives you.

E.g. heroku create ivan-hello-world-express

Create the Procfile

You need a file to tell heroku how to launch your app.

Edit Procfile, which should be in the root directory of your project. This file has no extension, and it must start with a capital letter. The Procfile is simply:

web: node server.js

This tells heroku that to start your web server, it needs to run the command node server.js.

Test it out locally with node-foreman

You can use an npm package called foreman to test that your Procfile works as expected. Install this globally: npm install -g foreman

This will give you the nf command. Try it out. nf --help

And, now, try starting your server via foreman. nf start

It should start up your server on port 5000 as a default.

This means that your server should not have any port 'hard-coded' as a default (like 3000). Make sure your server code looks something like this:

```
var server = http.createServer(app);
app.set('port', process.env.PORT || 3000);

server.listen(app.get('port'), function() {
  console.log('the server is NOW running on port', app.get('port'));
});
```

Commit any changes and push to Heroku

Make sure to commit any changes you made to your app, like adding the Procfile, etc.

```
git add .
git commit -m 'preparing for heroku'
```

Make sure you're on the master branch, or that you merge your changes back to master.

And now, to deploy your app to the web on Heroku: git push heroku master

You'll see a bunch of info scroll by from Heroku, but it should look something like this:

```
$ git push heroku master
Fetching repository, done.
Counting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 343 bytes | 0 bytes/s, done.
Total 4 (delta 2), reused 0 (delta 0)

-----> Node.js app detected

       PRO TIP: Specify a node version in package.json
       See https://devcenter.heroku.com/articles/nodejs-support

-----> Defaulting to latest stable node: 0.10.28
-----> Downloading and installing node
-----> Restoring node_modules directory from cache
-----> Pruning cached dependencies not specified in package.json
       npm WARN package.json hello-express@ No repository field.
-----> Writing a custom .npmrc to circumvent npm bugs
-----> Exporting config vars to environment
-----> Installing dependencies
       npm WARN package.json hello-express@ No repository field.
-----> Caching node_modules directory for future builds
-----> Cleaning up node-gyp and npm artifacts
-----> Building runtime environment
-----> Discovering process types
       Procfile declares types -> web

-----> Compressing... done, 5.3MB
-----> Launching... done, v4
       http://ivan-hello-world-express.herokuapp.com/ deployed to Heroku

To git@heroku.com:ivan-hello-world-express.git
   3d47745..3f34feb  master -> master
```

And you can open your browser, and visit your app on the web!

Day Five

Fridays are guest speaker days. Each guest speaker varies by availability. Day Six

Monday, welcome back! How were the weekend readings?

Agenda

Discuss readings on Modular JavaScript, CommonJS, Code Complexity and Clean Code
Browserify
Browserify Lab
RequireJS
Stretch goals: ECMAScript6/Harmony modules

Day Six Readings

We are loading you up on best-practices readings for the weekend. We will practice using these strategies for the rest of the development accelerator.

Rationale

Read Preface and Chapters 1-2 of Testable JavaScript
Read Chapter 1 of Clean Code

Implementation

Modular JavaScript and CommonJS

Read about the basics of Node's Require and Exports: http://openmymind.net/2012/2/3/Node-Require-and-Exports/
Dive deeper into exports: http://bites.goodeggs.com/posts/export-this/
Read about CommonJS: http://dailyjs.com/2010/10/18/modules/
Read about modular JavaScript, especially the section on CommonJS: http://addyosmani.com/writing-modular-js/
Read NodeJS Require best practices: http://www.mircozeiss.com/node-js-require-s-best-practices/

Now, you're ready to go on to Browserify. Browserify

Use your modules, plus already existing node core modules, in the browser

includes assert, path, url, crypto, domain, events, querystring, util, buffer, etc…
bundles up modules into one file, increasing performance

What Code is a Good Candidate for Browserify?

Anything you want to use on the server and in the browser.

Validation, for example: ensuring data from the user is in an acceptable format, such as an email address containing an @ sign and a ".". A sketch of such a shared module is below.
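A minimal sketch of a module usable on both sides (the file name and the isEmail helper are hypothetical, for illustration only):

```
// lib/validate.js (hypothetical): runs in Node directly, and in the
// browser once bundled by browserify
var isEmail = function(input) {
  // A deliberately naive check: the string must contain an '@' and a '.'
  return typeof input === 'string' &&
    input.indexOf('@') > -1 &&
    input.indexOf('.') > -1;
};

module.exports = { isEmail: isEmail };
```

On the server you would require('./lib/validate') directly; browserify bundles the same file so the identical require() call works in the browser.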

Alternatives to Browserify

RequireJS
ECMAScript 6 built-in modules

Others

Automation and Distribution

grunt-contrib-concat (for JS; can be replaced by Browserify)
grunt-contrib-copy (for HTML, images, plain CSS)
grunt-contrib-connect
grunt-contrib-watch

But wait, what about Bower?

bower install does not modify package.json. You could still use it, though.

Hello World!

The script tag looks a little different than normal because it uses the Require.js convention of placing a link to the require.js library in the src attribute and a link to the code that utilizes the library in the data-main attribute, as sketched below. Everything else in this file should look familiar.
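A sketch of that convention (the paths assume the bower setup described below):

```
<!-- require.js loads first, then fetches and runs the module named in data-main -->
<script data-main="js/config" src="bower_components/requirejs/require.js"></script>
```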

I'm actually not going to go over any styling because my styling is abysmal and I don't want to embarrass myself any more than necessary. Which means that next up is the bower configuration. I usually just run bower init from the root of the directory and answer the questions. Then create a .bowerrc file with the following:

{ "directory": "app/bower_components" }

This tells bower to install the components in app instead of the root of the project. Next, run bower install requirejs --save and bower install jquery --save. Alright that's all of the initial config and setup, next up is the Gruntfile.

Grunt is a task runner for JavaScript that makes the process of development much smoother. It allows the configuration of tasks, much like Rake does for Ruby or make does for C. If you haven't used Grunt before, I suggest checking out the docs, because I'm not going to go over the basics. Create a Gruntfile.js that looks like this:

```
module.exports = function(grunt) {
  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.loadNpmTasks('grunt-contrib-clean');
  grunt.loadNpmTasks('grunt-contrib-requirejs');

  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),

    clean: {
      build: ['build/'],
      dev: {
        src: ['build/**/*']
      }
    },

    copy: {
      dev: {
        expand: true,
        cwd: 'app',
        src: ['*.css', '*.html', 'bower_components/requirejs/require.js'],
        dest: 'build/',
        flatten: false,
        filter: 'isFile'
      }
    },

    requirejs: {
      compile: {
        options: {
          name: 'config',
          baseUrl: 'app/js/',
          mainConfigFile: 'app/js/config.js',
          out: 'build/client.js',
          optimizer: 'none'
        }
      }
    }
  });

  grunt.registerTask('build:dev', ['clean:dev', 'requirejs', 'copy:dev']);

};
```

When build:dev is called, first it removes everything currently in the build directory. Then it takes all of the files specified in the app/js/config.js file and "compiles" them into build/client.js. Finally, it copies over our static files, including the requirejs library.

The final step is to get some requirejs files into the application. There are going to be two files, app/js/config.js and app/js/main.js. The config.js is the base file and contains logic to load all of the libraries and custom js files for the application. It should look something like this:

```
require.config({
  paths: {
    "components": "../bower_components",
    "jquery": "../bower_components/jquery/dist/jquery"
  }
});
require(['main'], function() {
  console.log('main.js loaded');
});
```

First this file tells require where it can find our bower_components and jquery. These files don't need to be copied over with grunt-contrib-copy, as they will be included in our client.js file. The require statement at the bottom takes a series of file names (in this case just main.js) and a callback which runs once the module is loaded. The main.js file is where jquery is going to be loaded and used, and it should look something like this:

```
define(['jquery'], function($) {
  $('body').append('Hello World from Require.js');
});
```

The define function takes an array of dependencies and a callback that executes when they have all been loaded. Each parameter in the callback contains a module loaded through the dependencies, in order. For instance, to load Backbone.js with jquery and underscore, the define statement would look something like this:

```
define(['jquery', 'underscore', 'backbone'], function($, _, Backbone) {});
```

Of course, the location of those would have to be specified in the config.js as well. That's the basics, and the way I managed to get requirejs working for my workflow (yes, the alliteration is intentional) with grunt and bower.

Day Seven

Unit Testing
REST

Unit Testing

In computer programming, unit testing is a software testing method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures, are tested to determine if they are fit for use. Intuitively, one can view a unit as the smallest testable part of an application. In procedural programming, a unit could be an entire module, but it is more commonly an individual function or procedure. In object-oriented programming, a unit is often an entire interface, such as a class, but could be an individual method. Unit tests are short code fragments created by programmers or occasionally by white box testers during the development process.

With Mocha and Chai

Mocha is a feature-rich JavaScript test framework running on node.js and the browser, making asynchronous testing simple and fun. Mocha tests run serially, allowing for flexible and accurate reporting, while mapping uncaught exceptions to the correct test cases.

Chai is a BDD / TDD assertion library for node and the browser that can be delightfully paired with any javascript testing framework … Chai has several interfaces that allow the developer to choose the most comfortable. The chain-capable BDD styles provide an expressive language & readable style, while the TDD assert style provides a more classical feel
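For example, the same assertion in Chai's two main styles:

```
var chai = require('chai');

// BDD style: chainable and sentence-like
chai.expect(2 + 2).to.equal(4);

// TDD style: classical assert functions
chai.assert.equal(2 + 2, 4);
```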

Mocha and Chai (along with Sinon, which we will use later) are two of the leading JS testing frameworks. They are frequently used together. An alternative library that includes the features of Mocha, Chai, and Sinon is Jasmine.

Testing NodeJS Objects on the Server Side

You can use the Hello World Express app as a starting point for this tutorial, or any other app you have. cd into your project's root folder.

Install Mocha as a global npm module: npm install -g mocha. We do this globally to be able to use the mocha command from our command line. We will also install Mocha and Chai as devDependencies in our project: npm install mocha chai --save-dev

Make a test directory: mkdir -p test/unit

We will also make a lib directory to hold our "Plain Old JavaScript Objects" (POJSOs)

Here's our simple Post object again:

```
// lib/post.js
var Post = function(title) {
  return {title: title};
};

module.exports = Post;
```

Great, we have a Post object constructor that can initialize an instance of a post with a title property.

Let's test to see that the constructor does what we think it will. The returned object should have a title property.

```
// test/post_test.js
var expect = require('chai').expect,
    Post = require('../lib/post');

describe('Post object tests', function() {
  var post;

  beforeEach(function() {
    post = new Post('A test post');
  });

  describe('constructor', function() {

    it('post should be truthy (exists)', function() {
      expect(post).to.be.ok;
    });

    it('post should have title property', function() {
      expect(post).to.have.property('title');
    });

    it('post title property matches beforeEach', function() {
      expect(post.title).to.equal('A test post');
    });

  });
});
```

Now we can run the mocha tests with the Mocha command line tool. mocha test/unit should do the trick!

The next step is to use the grunt-simple-mocha grunt plugin to be able to type grunt test and run this, and other, unit tests.

By now you should be familiar with Grunt and configuring grunt plugins. Try it out; a configuration sketch is below.
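A minimal grunt-simple-mocha configuration might look like this (the target name and file glob are assumptions consistent with the test/unit directory created above):

```
// Gruntfile.js (sketch) using grunt-simple-mocha
module.exports = function(grunt) {
  grunt.loadNpmTasks('grunt-simple-mocha');

  grunt.initConfig({
    simplemocha: {
      dev: {
        src: ['test/unit/*.js']
      }
    }
  });

  grunt.registerTask('test', ['simplemocha:dev']);
};
```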

Testing JS Objects on the Client (Browser) Side

We can also run these tests in the browser environment. For this, we will need a 'test harness' HTML file.

Install Mocha and Chai via Bower, too: bower install mocha chai

The harness page, test/browser/index.html, is titled "Post tests" and loads Mocha, Chai, our Post object, and the browser test file.
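A minimal sketch of such a harness page (the paths are assumptions based on the bower install above):

```
<!DOCTYPE html>
<html>
  <head>
    <title>Post tests</title>
    <link rel="stylesheet" href="../../bower_components/mocha/mocha.css">
  </head>
  <body>
    <div id="mocha"></div>
    <script src="../../bower_components/mocha/mocha.js"></script>
    <script src="../../bower_components/chai/chai.js"></script>
    <script>mocha.setup('bdd');</script>
    <!-- lib/post.js uses module.exports; a browser-safe variant or a
         bundling step may be needed before it loads via a plain script tag -->
    <script src="../../lib/post.js"></script>
    <script src="post_test.js"></script>
    <script>mocha.run();</script>
  </body>
</html>
```

And the browser test file itself: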

```
// test/browser/post_test.js
var expect = chai.expect;

describe('Post object tests', function() {
  var post;

  beforeEach(function() {
    post = new Post('A test post');
  });

  describe('constructor', function() {

    it('post should be truthy (exists)', function() {
      expect(post).to.be.ok;
    });

    it('post should have title property', function() {
      expect(post).to.have.property('title');
    });

    it('post title property matches beforeEach', function() {
      expect(post.title).to.equal('A test post');
    });

  });
});
```

Now open this HTML document in the browser: open test/browser/index.html

Stretch goal: The next step is to use the grunt-mocha grunt plugin to run these browser tests headlessly via PhantomJS.

TODO: This is a work in progress …

A Conversation about REST
adapted from an original post by Ryan Tomayko

Brother: Hey, I have a question for you… Who is “Roy Fielding”?

ME: Some guy. He's smart.

Brother: Oh? What did he do?

ME: He helped write the first web servers, that sent documents across the Internet… and then he did a ton of research explaining why the web works the way it does. His name is on the specification for the protocol that is used to get pages from servers to your browser.

Brother: How does that work, anyway?

ME: The web?

Brother: Yeah.

ME: Hmm. Well, it's all pretty amazing really. And the funny thing is that it's all very undervalued. The protocol I mentioned, that he helped write, HTTP, it's capable of all sorts of neat stuff that people ignore for some reason.

Brother: You mean “http” like the beginning of what I type into the browser?

ME: Yeah. That first part tells the browser what protocol to use. That stuff you type in there is one of the most important breakthroughs in the history of computing.

Brother: Why?

ME: Because it is capable of describing the location of something anywhere in the world from anywhere in the world. It's the foundation of the web. You can think of it like GPS coordinates for knowledge and information.

Brother: For web pages?

ME: For anything really. That guy, Roy Fielding, he talks a lot about what those things point to in that research I was talking about. The whole web is built on an architectural style called “REST”. REST provides a definition of a “resource”, which is what those things point to.

Brother: A web page is a resource?

ME: Kind of. A web page is a “representation” of a resource. Resources are just concepts. URLs-- those things that you type into the browser...

Brother: I know what a URL is..

ME: Oh, right. Those URLs tell the browser that there's a concept somewhere. A browser can then go ask for a specific representation of the concept. Specifically, the browser asks for the web page representation of the concept.

Brother: What other kinds of representations are there?

ME: Actually, representations are one of those things that don't get used a lot. In most cases, a resource has only a single representation. But we're hoping that representations will be used more in the future because there's a bunch of new formats popping up all over the place.

Brother: Like what?

ME: Hmm. Well, there's this concept that people are calling “Web Services” or "APIs". It means a lot of different things to a lot of different people but the basic concept is that machines could use the web just like people do.

Brother: Is this another robot thing?

ME: No, not really. I don't mean that machines will be sitting down at the desk and browsing the web. But computers can use those same protocols to send messages back and forth to each other. We've been doing that for a long time but none of the techniques we use today work well when you need to be able to talk to all of the machines in the entire world.

Brother: Why not?

ME: Because they weren't designed to be used like that. When Fielding and his buddies started building the web, being able to talk to any machine anywhere in the world was a primary concern. Most of the techniques we use at work to get computers to talk to each other didn't have those requirements. You just needed to talk to a small group of machines.

Brother: And now you need to talk to all the machines?

ME: Yes - and more. We need to be able to talk to all machines about all the stuff that's on all the other machines. So we need some way of having one machine tell another machine about a resource that might be on yet another machine.

Brother: What?

ME: Let's say you're talking to our sister and she wants to borrow Great Grandma's silver water jug or something. But you don't have it - Mom has it. So you tell our sister to get it from Mom instead. This happens all the time in real life and it happens all the time when machines start talking too. On the Internet, it's called a "redirect".

Brother: So how do the machines tell each other where things are?

ME: The URL, of course. If everything that machines need to talk about has a corresponding URL, you've created the machine equivalent of a noun. That you and I and the rest of the world have agreed on talking about nouns in a certain way is pretty important, eh?

Brother: Yeah.

ME: Machines don't have a universal noun - that's why they suck. Every programming language, database, or other kind of system has a different way of talking about nouns. That's why the URL is so important. It lets all of these systems tell each other about each other's nouns.

Brother: But when I'm looking at a web page, I don't think of it like that.

ME: Nobody does. Except Fielding and a handful of other people. That's why machines still suck.

Brother: Ha, what about verbs and pronouns and adjectives?

ME: Funny you asked, because that's another big aspect of REST. Well, verbs are anyway.

Brother: I was just joking.

ME: It was a funny joke! But it's actually not a joke at all. Verbs are important. There's a powerful concept in programming and CS theory called “polymorphism”. That's a geeky way of saying that different nouns can have the same verb applied to them.

Brother: I don't get it.

ME: Well.. Take a look at your coffee table. What are the nouns? Laptop, bottle, book, paper. Now, what are some things you can do to all of these things?

Brother: I don't understand what you mean...

ME: You can "get" them, right? You can pick them up. You can knock them on the floor. You can burn them. You can apply those same exact verbs to any of the objects sitting there.

Brother: Okay... so?

ME: Well, that's important. What if instead of me being able to say to you, "get the bottle," and "get the magazine," and "get the book"; what if instead we needed to come up with different verbs for each of the nouns? I couldn't use the word "get" universally, but instead had to think up a new word for each verb/noun combination. "shmet the bottle", "mandle the magazine", "zorp the book"

Brother: Wow! That's weird.

ME: Yes, it is. Our brains are somehow smart enough to know that the same verbs, like GET, can be applied to many different nouns. Some verbs are more specific than others and apply only to a small set of nouns. For instance, I can't drive a cup and I can't drink a car. But some verbs are almost universal like GET, PUT, and DELETE.

Brother: You can't DELETE a cup.

ME: Well, okay, but you can throw it away. That was another joke, right?

Brother: Yeah.

ME: So anyway, HTTP—this protocol Fielding and his friends created—is all about applying verbs to nouns. For instance, when you go to a web page, the browser does an HTTP GET on the URL you type in and back comes a web page.

Web pages usually have images, right? Those are separate resources. The web page just specifies the URLs to the images and the browser goes and does more GETs using the HTTP protocol on them until all the resources are obtained and the web page is displayed. But the important thing here is that very different kinds of nouns can be treated the same. Whether the noun is an image, text, video, an mp3, a slideshow, whatever. I can GET all of those things the same way given a URL.

Brother: Sounds like GET is a pretty important verb.

ME: It is. Especially when you're using a web browser because browsers pretty much just GET stuff. They don't do a lot of other types of interaction with resources. This is a problem because it has led many people to assume that HTTP is just for GETing. But HTTP is actually a general purpose protocol for applying verbs to nouns.

Brother: Cool. But I still don't see how this changes anything. What kinds of nouns and verbs do you want?

ME: Well the nouns are there but not in the right format.

Think about when you're browsing around amazon.com looking for things to buy me for Christmas (whispers: VITAMIX!!!) . Imagine each of the products as being nouns. Now, if they were available in a representation that a machine could understand, you could do a lot of neat things.

Brother: Why can't a machine understand a normal web page?

ME: Because web pages are designed to be understood by people. A machine doesn't care about layout and styling. Machines basically just need the data. Ideally, every URL would have a human readable and a machine readable representation. When a machine GETs the resource, it will ask for the machine readable one. When a browser GETs a resource for a human, it will ask for the human readable one.

Brother: So people would have to make machine formats for all their pages?

ME: If it were valuable.

Look, we've been talking about this with a lot of abstraction. How about we take a real example. Imagine you are a teacher - at school you probably have a big computer system, or three or four computer systems more likely, that would let you manage students: what classes they're in, what grades they're getting, emergency contacts, information about the books you teach out of, etc. If the systems are web-based, then there's probably a URL for each of the nouns involved here: student, teacher, class, book, room, etc. Right now, getting the URL through the browser gives you a web page. If there were a machine readable representation for each URL, then it would be trivial to latch new tools onto the system because all of that information would be consumable in a standard way. It would also make it quite a bit easier for each of the systems to talk to each other. Or, you could build a state or country-wide system that was able to talk to each of the individual school systems to collect testing scores. The possibilities are endless.

Each of the systems would get information from each other using a simple HTTP GET. If one system needs to add something to another system, it would use an HTTP POST. If a system wants to replace something in another system, it uses an HTTP PUT, or to do a partial update, it'll hopefully use PATCH. The only thing left to figure out is what the data should look like.

Brother: So this is what software developers work on now? Deciding what the data should look like?

ME: More or less it is in the web development world, thanks almost entirely to the popularity of RESTful web frameworks like Ruby on Rails.

But this is a very recent change! Just a few years ago, the large majority of developers were busy writing layers of complex specifications for how to access data in a different way that isn't nearly as useful or eloquent. Nouns weren't universal and verbs weren't polymorphic. They basically ignored throwing out decades of real field usage and proven technique and kept starting over with something that looks a lot like other systems that have failed in the past. They used HTTP but only because it let them talk to our network and security people less. It was like trading simplicity for flashy tools and wizards.

Brother: Ew…Why?

ME: I have no idea.

Brother: But we are done with all that?

ME: We are done. Now, we just tell Rails what we want our data to look like, and it takes care of all of the communication pieces for us. It's a huge boost for productivity!

Day Eight

Mongo, Mongoose and REST

Testing with Super Agent

Super Agent is a tool for making REST requests from within Node. It makes sending requests as easy as .put or .get, much in the same way that express simplifies the handling of incoming REST requests. While Super Agent is not specifically designed to test JSON APIs, it greatly simplifies acceptance testing of said APIs. Testing with Super Agent requires both a testing framework and a collection of expect statements; being able to run the tests from Grunt is key as well. A combination of Mocha and Chai fits the bill nicely. First run npm install superagent chai mocha --save, then create a test file test/api/notes_api_test.js with the following code:

var superagent = require('superagent');
var chai = require('chai'),
    expect = chai.expect,
    should = chai.should();
var app = require('../server.js').app;

describe('Notes JSON api', function() {
  var id;

  // testing the POST function of the JSON API
  it('can successfully create a new note', function(done) {
    superagent.post('http://localhost:3000/api/v1/notes/')
      .send({ body: 'a new note!' })
      .end(function(err, res) {
        expect(err).to.eql(null);
        expect(res.body._id).to.not.be.eql(null);
        expect(res.body.body).to.be.eql('a new note!');
        id = res.body._id;

        done();
      });
  });

  // testing the GET function of the JSON API
  it('can successfully get a note', function(done) {
    superagent.get('http://localhost:3000/api/v1/note/' + id)
      .end(function(err, res) {
        expect(err).to.eql(null);
        expect(res.body._id).to.be.eql(id);
        expect(res.body.body).to.be.eql('a new note!');

        done();
      });
  });

  it('can successfully update a note', function(done) {
    superagent.put('http://localhost:3000/api/v1/note/' + id)
      .send({ body: 'an updated note' })
      .end(function(err, res) {
        expect(err).to.eql(null);
        expect(res.body._id).to.be.eql(id);
        expect(res.body.body).to.be.eql('an updated note');

        done();
      });
  });

  it('can successfully delete a note', function(done) {
    superagent.del('http://localhost:3000/api/v1/note/' + id)
      .end(function(err, res) {
        expect(err).to.eql(null);

        done();
      });
  });
});

Some interesting things happen in this code. First, when app is required from server.js, it actually starts the server before sending JSON requests to it. When each request is sent to the server, the callback receives a response that should contain a successful JSON object. In the case of creating the object (the POST request), it returns a copy of the object that presumably has been saved to a persistent database. It's integration testing, so while it's not precise, it does test the general use case for the JSON api.
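One detail the listings below don't show: for require('../server.js').app to work, server.js has to export the express app. A one-line sketch of the assumed addition:

// assumed addition at the bottom of server.js, so the tests can require the app
exports.app = app;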

Mongoose is an abstraction layer on top of MongoDB. It allows developers to emulate a few relational database constructs while keeping the flexibility of MongoDB. I'm not going to go over installing MongoDB, but the instructions can be found here. The first step to add Mongoose to a project is to add it to the package.json dependencies.

//package.json
{
  "name": "notes",
  "description": "a note taking app",
  "version": "0.0.1",
  "dependencies": {
    "express": "^4.0",
    "mongoose": "^3.8"
  }
}

Then run the usual npm install and mongoose is ready to go in an application. Mongoose uses a schema to define what an object should look like and what data it should contain. I like to store my Mongoose schemas in a folder called models from my root directory. The first model I'm going to create is a notes object.

var mongoose = require('mongoose');

var noteSchema = new mongoose.Schema({ body: String });

module.exports = mongoose.model('Note', noteSchema);

The schema for a notes object is pretty simple. For now it only contains one field (the body of the note), which should have a type of String. All of the different field types can be found here. The next step is to tell express where to find the note model in a server.js file and connect to the MongoDB server.

//server.js
var express = require('express');
var http = require('http');
var mongoose = require('mongoose');

mongoose.connect('mongodb://localhost/my_awesome_app');

var app = express();

app.set('port', process.env.PORT || 3000);

var server = http.createServer(app);
server.listen(app.get('port'), function() {
  console.log('the server is running on port ' + app.get('port'));
});

This server.js file doesn't do much: it connects to the mongodb database running on localhost and listens for incoming HTTP requests. When starting a mongo server on my local machine, I like to create a db folder in my project directory and start my mongo server with mongod --dbpath ./db (but make sure to add db to .gitignore). The next step is to create routes that actually handle REST requests. Create a folder called routes and add a file by the name of noteRoutes.js to that folder with the following:

var Note = require('../models/note');

exports.collection = function(req, res) {
  res.setHeader('Content-Type', 'application/json');
  Note.find({}, function(err, notes) {
    if(err) {
      res.send(500, {"error": err});
      return false;
    }
    res.send(notes);
  });
};

This is the function that gets all of the notes that are saved in the database and then sends them out as JSON if there are no errors. It uses the mongo find command through mongoose, and because nothing is passed in the first argument, it returns every document in the collection. Also, note that it's exported as collection, because it sends the entire collection.
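For contrast, a hypothetical filtered query: passing conditions in the first argument narrows what comes back.

// hypothetical: fetch only the notes whose body matches exactly
Note.find({ body: 'a new note!' }, function(err, notes) {
  // notes now holds just the matching documents
});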

Now the express server.js file needs to be updated in order to use this function:

//server.js
var express = require('express');
var http = require('http');
var mongoose = require('mongoose');

var noteRoutes = require('./routes/noteRoutes');

mongoose.connect('mongodb://localhost/my_awesome_app');

var app = express();

app.get('/api/v1/notes', noteRoutes.collection);

app.set('port', process.env.PORT || 3000);

var server = http.createServer(app);
server.listen(app.get('port'), function() {
  console.log('the server is running on port ' + app.get('port'));
});

It only takes two lines of code to get express talking with our mongoose model. First we require the noteRoutes.js file; then, when we receive a GET request to /api/v1/notes, we call noteRoutes.collection and it returns all of the objects in the notes collection. Adding the rest of the REST routes is pretty simple; with all of them in place, noteRoutes.js should look like this:

//routes/noteRoutes.js
var Note = require('../models/note');

exports.collection = function(req, res) {
  res.setHeader('Content-Type', 'application/json');
  Note.find({}, function(err, notes) {
    if(err) {
      res.send(500, {"error": err});
      return false;
    }
    res.send(notes);
  });
};

exports.findById = function(req, res) {
  res.setHeader('Content-Type', 'application/json');
  Note.findOne({"_id": req.params.id}, function(err, note) {
    if(err) {
      res.send(500, {error: err});
      return false;
    }
    res.send(note);
  });
};

exports.create = function(req, res) {
  res.setHeader('Content-Type', 'application/json');
  var note = new Note({body: req.body.body});
  note.save(function(err, resNote) {
    if(err) {
      res.send(500, {error: err});
      return false;
    }
    res.send(resNote);
  });
};

exports.update = function(req, res) {
  res.setHeader('Content-Type', 'application/json');
  var id = req.params.id;
  delete req.body._id;

  Note.findOneAndUpdate({'_id': id}, req.body, function(err, note) {
    if(err) {
      res.send(500, {error: err});
      return false;
    }
    res.send(note);
  });
};

exports.destroy = function(req, res) {
  res.setHeader('Content-Type', 'application/json');
  Note.remove({'_id': req.params.id}, function(err) {
    if(err) {
      res.send(500, {error: err});
      return false;
    }
    res.send({"message": "success!"});
  });
};

First, since express 4 removed most of the connect middleware that was included with express 3, and noteRoutes.js needs to parse the body of the incoming request, body-parser has to be added to the application with npm install body-parser --save. Then, the server.js file should be updated to look like this:

//server.js
var express = require('express');
var http = require('http');
var mongoose = require('mongoose');
var bodyparser = require('body-parser');

var noteRoutes = require('./routes/noteRoutes');

mongoose.connect('mongodb://localhost/my_awesome_app');

var app = express();

app.use(bodyparser());

app.get('/api/v1/notes', noteRoutes.collection);
app.get('/api/v1/note/:id', noteRoutes.findById);
app.post('/api/v1/notes', noteRoutes.create);
app.put('/api/v1/note/:id', noteRoutes.update);
app.delete('/api/v1/note/:id', noteRoutes.destroy);

app.set('port', process.env.PORT || 3000);

var server = http.createServer(app);
server.listen(app.get('port'), function() {
  console.log('the server is running on port ' + app.get('port'));
});

Chapter 9

AJAX

Now that we've built a REST API, and tested it with our "headless" superagent tests, we can also access the API with JavaScript from a web browser.

AJAX is a term coined by Jesse James Garrett in 2005 to describe the technology stack that enables Single Page Applications. It stands for "Asynchronous JavaScript And XML".

Now, most people actually tend to send JSON back and forth, more than XML, but saying "AJAJ" is kind of silly …so we've stuck with AJAX.

The Asynchronous part of the acronym refers to the fact that we can send data to the server from the browser and, thanks to the browser's JavaScript event loop, have a function executed later when the server returns data.

A basic AJAX request in the Browser JavaScript Console

Make sure your notes app is running: grunt serve or node server.js

var request = new XMLHttpRequest();
request.open('GET', '/api/v1/notes', true);

request.onload = function() {
  if (request.status >= 200 && request.status < 400) {
    // Success!
    var data = JSON.parse(request.responseText);
    console.log(data);
  } else {
    // We reached our target server, but it returned an error
    console.log("there was an error with the server: " + request.status);
  }
};

request.onerror = function() {
  // There was a connection error of some sort
  console.log("There was an error with the request's connection.");
};

request.send();

A jQuery AJAX request in the Browser JavaScript Console

Make sure your notes app is running: grunt serve or node server.js

var data = '';

$.ajax({
  url: '/api/v1/notes',
  data: data,
  success: function(data) {
    data.forEach(function(element) {
      $('body').append('<p>' + element.noteBody + '</p>');
    });
  },
  dataType: 'json'
});

Integrating jQuery, Browserify, and AJAX

You'll need to set up your notes app with Browserify, grunt initConfig, etc. I've chosen client.js as the file that Browserify will bundle my app/js into.
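If you haven't wired that up yet, here is a minimal Gruntfile sketch, assuming the grunt-browserify plugin and the app/js and build paths used in this chapter:

// Gruntfile.js (a sketch; the paths and plugin choice are assumptions)
module.exports = function(grunt) {
  grunt.loadNpmTasks('grunt-browserify');

  grunt.initConfig({
    browserify: {
      client: {
        src: ['app/js/*.js'],    // bundle everything under app/js
        dest: 'build/client.js'  // the single file the page loads
      }
    }
  });

  grunt.registerTask('default', ['browserify']);
};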

jQuery Browserify Ajax Demo

// app/js/ajax.js
$ = require('jquery');

var data = '';

$.ajax({
  url: '/api/v1/notes',
  data: data,
  success: function(data) {
    data.forEach(function(element) {
      $('#notes').append('<li>' + element.noteBody + '</li>');
    });
  },
  dataType: 'json'
});

Guest Speakers

AWS EC2

Functional Programming

Learning Ember via the Ember CLI

Here is why I chose to use Ember, and the quickest way to get started.

Why Ember?

You're going to either create your own framework or use someone else's, and the Ember community has already thought through a lot of the problems you'll hit, saving you time and money.

• Documentation: Ember used to have a reputation for poor documentation, but it's now well known to be much better (I'm looking at you, Angular). Start with the Ember Guides; I will give you more resources later.
• Yehuda Katz: a core contributor to jQuery, Rails, and other high profile projects, he is known for high quality software and for sticking around a long time to see a project grow. And for the Rails folks in the room, Ember Data and ActiveModel Serializers are a match made for each other.
• Ember embraces web standards: Google, and Angular, have a reputation for making up their own way of doing things. Ember uses Handlebars, ES6 modules, Web Components, etc. You know that if another standard comes out, it will be adopted by Ember.
• Ember CLI: the team is really focused on supporting a quick development workflow through tools like Ember CLI, which cuts out a lot of boilerplate.
• For me, Ember is a natural extension of Backbone, without all the setup and wiring. There is a clear separation of concerns, which makes it ideal for a large-scale app. Convention over Configuration decreases the number of design decisions you have to make.

Why not Angular?

I'm not trying to single out Angular in particular; it's just that the class I am preparing this blog post for is presenting both. I don't have any experience with Angular, so everything below is quoted:

    "kills the DOM...various ng-attribute references cluttered the page around and this was mixed with what is called "mustache-esque" template bindings." source

    Angular's creator describes it as a metaframework - a framework for creating your application's framework. Thus, if you get two different Angular apps, their internals will look completely different.

    This is not the approach Ember takes, where you buy in to the framework's conventions. So, one could argue that once you learn the conventions, you'll spend much less time on boilerplate writing a new Ember app than a new Angular app. This doctrine also belongs to Rails, and it's worked out pretty well for them.

    source

    To dive deeper, read A Five Part Blog Post Series Comparing Angular and Ember and Backbone, Angular, or Ember.

Angular vs Ember slides.

Prerequisites

    You'll need the following modules if you don't have them already

npm install -g phantomjs bower

Installation

    First step is to install the command line tool globally:

    npm install -g ember-cli

Then, install the Ember Chrome Extension.

Kicking the Tires

    Examine carefully the output of the help option for the ember command.

ember --help

Our First App Setup

ember new emberNotes
cd emberNotes
ember serve

Take a look (in your editor) at app/templates/application.hbs. Go ahead and change the h2 element to "Welcome to Notes" or something similar. The {{outlet}} tag is where our content will end up.
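After that edit, app/templates/application.hbs might look something like this (the heading text is just a suggestion):

<h2 id="title">Welcome to Notes</h2>

{{outlet}}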

Generating More

Browse to the List of Ember Generators.

ember g model note
ember g controller notes
ember g template note
ember g route index

    Edit app/routes/index.js:

1. Include a model attribute of the route, that points to:
2. A dummy data variable

    import Ember from 'ember';

export default Ember.Route.extend({
  model: function() {
    return data.result;
  }
});

var data = {
  "status": "ok",
  "result": [
    { noteBody: "Twilight Sparkle" },
    { noteBody: "Applejack" },
    { noteBody: "Fluttershy" },
    { noteBody: "Rarity" },
    { noteBody: "Pinkie Pie" },
    { noteBody: "Rainbow Dash" }
  ]
};

    And, in app/templates/index.hbs:

<ul>
{{#each this}}
  <li>{{noteBody}}</li>
{{/each}}
</ul>

Now, let's add images to your data. Add a picture attribute, something like this:

"result": [
  { noteBody: "Twilight Sparkle",
    picture: "http://img4.wikia.nocookie.net/__cb20140420032412/mlp/images/thumb/e/e0/Twilight_Sparkle_after_drying_herself_S1E03.png/209px-Twilight_Sparkle_after_drying_herself_S1E03.png" },
  { noteBody: "Applejack",
    picture: "http://img3.wikia.nocookie.net/__cb20121029101939/mlp/images/thumb/e/ee/Applejack_proud_of_herself_S1E01.png/209px-Applejack_proud_of_herself_S1E01.png" }
]

and in your index.hbs:

<ul>
{{#each this}}
  <li><img src="{{picture}}"> {{noteBody}}</li>
{{/each}}
</ul>

Now, with more Ponies!

Two-way Data Binding in index.hbs

    {{input type="text" value=name placeholder="Enter your pony name"}}

    Hello, my pony name is: {{name}}, and I think Ember is great!

    More on Ember CLI

Ember CLI docs
Prototyping an Ember App in 20 minutes

Top Ten Resources for Staying Up to Date on Ember

http://emberjs.com/guides/
http://emberwatch.com
https://emberflare.com
http://www.embercasts.com
http://www.confreaks.com/events/emberconf2014
http://emberweekly.com
https://www.codeschool.com/courses/warming-up-with-ember-js
http://pluralsight.com/training/courses/TableOfContents?courseName=fire-up-emberjs
https://courses.tutsplus.com/courses/lets-learn-ember
http://voidcanvas.com/emberjs-tutorial-two-way-data-binding/

    And if you're a Rails dev, too:

https://emberbites.com

Authentication with Basic HTTP and JWTs

JSON Authentication with Passport and Basic HTTP

Many modern web applications have an architecture that involves a client side JavaScript web app talking to a server-side persistence layer over JSON. Authentication over a JSON api is a tricky subject, primarily due to its stateless nature. Although HTTP is technically stateless, the browser is capable of storing cookies that can be checked by a webserver when requests are made to it. But with a JSON api the server does not have direct access to the browser. Each request has no knowledge of any of the previous requests, meaning that each request that needs to be authenticated needs to transmit its credentials with the request. The eventual goal is to have a piece of information that can be easily passed with every request and that tells nothing about the user if passed in the clear. Most APIs (such as Twitter or Facebook) use a public/private key pair that has to be created on their site in order to use the api, but asking a user to do this in order to use a website would be ridiculous. The goal is to have a token created by the server and passed back to the client, which can then be sent with every request that needs to be authenticated. Eventually this will be a JSON Web Token that contains an encoded set of information that allows the server to find the user and all their attributes. However, the server first needs to determine if it should send the token to the client requesting it, which will require a user name and password.
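Here is a sketch of the flow this section builds toward, using the superagent client from the testing chapter; the URL and the jwt header name are assumptions until the code below defines them:

var superagent = require('superagent');

// 1. sign in once over HTTPS with basic auth to obtain a token
superagent.get('https://localhost:3000/api/v1/users')
  .auth('user@example.com', 'secret') // HTTP Basic credentials
  .end(function(err, res) {
    var token = res.body.jwt; // the server-issued JSON Web Token

    // 2. later requests carry only the token, never the password
    superagent.get('https://localhost:3000/api/v1/notes')
      .set('jwt', token)
      .end(function(err, res) {
        // res.body is the authenticated payload
      });
  });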

I have found two easy ways to implement this with passport: HTTP Basic and Digest. The basic method of authentication sends the username and password over in plain text, which is less than ideal, but digest requires that the password be stored in plain text in the server's database, which is even less ideal. Considering how easy it is to use HTTPS with node, and how frequently servers seem to get broken into, my personal preference is to use HTTP Basic over a secured connection. During development the server will use a self signed certificate, but in production this would need to be replaced with an actual SSL cert to avoid scary warnings in the browser. Instructions on how to generate a certificate can be found here.
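For reference, one common way to generate a development certificate and key, with file names matching the config folder used below:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout config/key.pem -out config/cert.pem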

The first step to authenticating a node server with a JSON api is to create a node server with a JSON api. Create a new repository with a package.json file that looks something like this:

{
  "name": "awesome-json-api",
  "description": "my super awesome json api",
  "version": "0.0.1",
  "dependencies": {
    "express": "^4.x",
    "passport": "^0.2",
    "passport-http": "^0.2",
    "mongoose": "^3.8",
    "bcrypt": "latest",
    "jwt-simple": "^0.2",
    "moment": "^2.7"
  }
}

In our package.json file we're including express 4; passport for general passportyness; passport-http, which provides the HTTP Basic authentication; mongoose for saving users to the database; bcrypt for encrypting the passwords that will be saved in the database; and jwt-simple and moment for generating expiring tokens later on. Make sure to run npm install from the root directory, as well as generate a self signed ssl cert and key and place them in a folder called config. Now create a server.js file that looks something like this:

var express = require('express');
var passport = require('passport');
var mongoose = require('mongoose');
var https = require('https');
var fs = require('fs');

var app = express();

var options = {
  key: fs.readFileSync('config/key.pem'),
  cert: fs.readFileSync('config/cert.pem')
};

app.get('/', function(req, res) {
  res.json({'msg': 'hello world!'});
});

var server = https.createServer(options, app);
server.listen(process.env.PORT || 3000, function() {
  console.log('server running on port: ' + (process.env.PORT || 3000));
});

This server.js file pulls in the self signed certificate and key and creates a hello world https server based on those. The current version of this file also pulls in all of the libraries that will eventually be needed for authentication. The next step in the process is the creation of a User model. This particular model comes primarily from the authentication setup described on the scotch.io site. The method of authentication there is pretty awesome, but it doesn't work over a JSON api, as it requires access to the browser through session cookies and uses redirects for success/failure. Create a directory called models from the project root and add the following User.js file to that directory.

    //models/User.js

var mongoose = require('mongoose');
var bcrypt = require('bcrypt');
var jwt = require('jwt-simple');
var moment = require('moment');

var userSchema = mongoose.Schema({
  basic: {
    email: String,
    password: String
  }
});

userSchema.methods.generateHash = function(password) {
  return bcrypt.hashSync(password, bcrypt.genSaltSync(8), null);
};

userSchema.methods.checkHash = function(password) {
  return bcrypt.compareSync(password, this.basic.password);
};

userSchema.methods.createJWTToken = function(app) {
  var expires = moment().add('days', 7).valueOf();
  var that = this;
  var token = jwt.encode({
    iss: that._id,
    expires: expires
  }, app.get('jwtTokenSecret'));
  return token;
};

    module.exports = mongoose.model('User', userSchema);

This user model contains three methods: one that runs an incoming password through a one way hash, one that checks an incoming password against a hash saved in the database, and one that generates a token. Bcrypt handles all of the details of encrypting a password and adding salt through the genSaltSync command. Keep in mind that the higher the number passed into that function, the longer it will take to save a user to the database or to check if a user's credentials are correct. It's a synchronous function, so this is less than ideal. The third method is used to generate a JSON Web Token after a user's credentials have been successfully authenticated; I will go over this function once we get to the JWT portion of this tutorial.
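If that blocking cost ever becomes a problem, the bcrypt module also has an asynchronous API. A sketch of what a non-blocking variant could look like (the method name here is made up):

// hypothetical async variant of generateHash; does not block the event loop
userSchema.methods.generateHashAsync = function(password, callback) {
  bcrypt.genSalt(8, function(err, salt) {
    if(err) { return callback(err); }
    bcrypt.hash(password, salt, callback); // calls back with (err, hash)
  });
};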

Now that the user model has been created, passport needs to know how to use it to authenticate requests. I like to keep all of my authentication related js files in lib/authentication/. Create both of those folders and add the following passportBasic.js file:

//lib/authentication/passportBasic.js
var BasicStrategy = require('passport-http').BasicStrategy;
var User = require('../../models/User');

module.exports = function(passport) {
  passport.use('basic', new BasicStrategy({
    usernameField: 'email',
    passwordField: 'password'
  }, function(email, password, done) {
    User.findOne({'basic.email': email}, function(err, user) {
      if(err) { return done(err); }

      if(!user) { return done(null, false); }

      if(!user.checkHash(password)) { return done(null, false); }

      return done(null, user);
    });
  }));
};

This file essentially specifies what conditions mark a successful authentication. First we attempt to find the user; if there's an error, we return the error. If the user doesn't exist, we return false for authentication. If the password doesn't authenticate, we return false. If the program makes it past those conditions, it has found a valid user, and we return the user to passport. Passport knows that if false is returned from this function it should send a 401 Unauthorized to the client making the request. Something to keep in mind: this passport definition is only going to be used when a user signs in. When a user is created it won't need to go through an authentication process, and every other request should be authenticated with the JWT that will be generated upon a successful sign in. The next step is to create the sign up/sign in routes for the application. Create a routes directory from the root directory and add the following userRoutes.js file to that directory.

//routes/userRoutes.js
var User = require('../models/User');

module.exports = function(app, passport) {
  app.post('/api/v1/users', function(req, res) {
    User.findOne({'basic.email': req.body.email}, function(err, user) {
      if(err) { return res.json(500, err); }

      if(user) { return res.json(401, {'msg': 'email in use'}); }

      var newUser = new User();
      newUser.basic.email = req.body.email;
      newUser.basic.password = newUser.generateHash(req.body.password);

      newUser.save(function(err, resUser) {
        if(err) { return res.json(500, err); }

        return res.json(resUser);
      });
    });
  });

  app.get('/api/v1/users', passport.authenticate('basic', {session: false}), function(req, res) {
    return res.json({'jwt': req.user.createJWTToken(app)});
  });
};

The first function creates a user on a POST request and saves it to the database, after hashing the incoming password, if there is no other user with the specified email. With the login route (a GET request to /api/v1/users) the request goes through the authentication we specified with passport, and if it succeeds, passport runs the function specified in the get route. The last step to hooking up basic authentication is to add it to the server.js file.

var express = require('express');
var passport = require('passport');
var mongoose = require('mongoose');
var https = require('https');
var fs = require('fs');

var app = express();

app.set('jwtTokenSecret', process.env.JWT_SECRET || 'changemechangemechangeme');

require('./lib/authentication/passportBasic')(passport);
require('./routes/userRoutes')(app, passport);

var options = {
  key: fs.readFileSync('config/key.pem'),
  cert: fs.readFileSync('config/cert.pem')
};

app.get('/', function(req, res) {
  res.json({'msg': 'hello world!'});
});

var server = https.createServer(options, app);
server.listen(process.env.PORT || 3000, function() {
  console.log('server running on port: ' + (process.env.PORT || 3000));
});

Those two require lines are all it takes to add the authentication to passport and then add the signup/signin routes to the app. This new server.js also sets the jwtTokenSecret that is used to sign the JSON web tokens that the User model generates.

There is only one more piece to add to this application to be able to authenticate with JSON Web Tokens: the actual middleware that checks if the token/user on the incoming request is valid. Create a jwtAuth.js file in lib/authentication with the following code:

    //lib/authentication/jwtAuth.js

var User = require('../../models/User');
var jwt = require('jwt-simple');

module.exports = function(app) {
  var jwtauth = {};

  jwtauth.auth = function(req, res, next) {
    var token = req.body.jwt;

    if(!token) {
      return res.send(401, {'msg': 'no token specified'});
    }

    var decoded = jwt.decode(token, app.get('jwtTokenSecret'));
    User.findOne({'_id': decoded.iss}, function(err, user) {
      if(err) { return res.send(500, err); }

      if(!user) { return res.send(401); }

      req.user = user;
      return next();
    });
  };

  return jwtauth;
};

In this function we create a jwtauth object with an auth function. This is because middleware has to be a function with a specific signature, but the app needs to be passed in to the exported function in order to access the token secret. The auth function attempts to decode the token if one is specified. After the token is decoded, it attempts to find a user with the specified id, and if one exists it calls the next function. If any of these conditions are not met, the app sends a 401. This function can be placed within the call chain of a route. For instance, to use it in our server.js hello world route, we just add it before the function that sends hello world.

var express = require('express');
var passport = require('passport');
var mongoose = require('mongoose');
var https = require('https');
var fs = require('fs');

var app = express();

var jwtauth = require('./lib/authentication/jwtAuth')(app);

app.set('jwtTokenSecret', process.env.JWT_SECRET || 'changemechangemechangeme');

require('./lib/authentication/passportBasic')(passport);
require('./routes/userRoutes')(app, passport);

var options = {
  key: fs.readFileSync('config/key.pem'),
  cert: fs.readFileSync('config/cert.pem')
};

app.get('/', jwtauth.auth, function(req, res) {
  res.json({'msg': 'hello world!'});
});

var server = https.createServer(options, app);
server.listen(process.env.PORT || 3000, function() {
  console.log('server running on port: ' + (process.env.PORT || 3000));
});

To use the jwt authentication middleware, just require it into the app and place it in the function chain for a route.

Getting angular to talk to the basic/jwt authentication scheme described here is fairly simple. It essentially involves two parts: first, getting the JSON Web Token from the signin route, and second, adding the jwt response as a browser cookie. Assuming a bower/browserify setup, run the following from the root of the app directory:

bower install angular angular-route angular-base64 angular-cookies --save

The angular package provides the angular base, angular-route provides angular routing, angular-base64 allows base64 encoding of the basic auth credentials (which passport expects), and angular-cookies allows browser cookies to be set.
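For illustration, with made-up credentials, the header the signin controller below ends up sending looks like this (the value is just the base64 encoding of user@example.com:secret):

Authorization: Basic dXNlckBleGFtcGxlLmNvbTpzZWNyZXQ=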

This app is going to assume that all the angular client side code will reside in /app and will be run through browserify into a /dist or /build directory. The app folder will have the following folders: views, js, js/controllers, and possibly a bower_components folder as well. All of the controllers and other components will be drawn into a file named app.js in app/js. The browserify 'compiled' file will be called client.js, and it will be included into an index.html that gets copied over by the grunt build task. The index.html file should look something like this:

<!DOCTYPE html>
<html>
  <head>
    <title>Notes Angular</title>
  </head>
  <body>
    <div ng-app="notesApp">
      <div ng-view></div>
    </div>
    <script src="client.js"></script>
  </body>
</html>

The index.html is pretty simple: all it does is load the client.js file and provide a div for the app and one for the view. The app.js that browserify uses to create client.js will look like this:

require('angular/angular');
require('angular-route');
require('angular-cookies');
require('angular-base64');

var notesApp = angular.module('notesApp', ['ngRoute', 'base64', 'ngCookies']);

require('./controllers/notesController')(notesApp);
require('./controllers/usersController')(notesApp);

notesApp.config(['$routeProvider', function($routeProvider) {
  $routeProvider
    .when('/notes', {
      templateUrl: 'views/notes.html',
      controller: 'NotesController'
    })
    .when('/signin', {
      templateUrl: 'views/signin.html',
      controller: 'SigninController'
    })
    .otherwise({
      redirectTo: '/signin'
    });
}]);

This code won't actually run as is; the controllers and the views have yet to be added, but this is the overall structure of the app. It creates our notesApp object and then passes the notesApp object to the controller files to add the users and notes controllers. Then the /notes and /signin routes are added to the notesApp, with signin as the default.

The next step is to create the signin controller and view. First the view, which should be located in app/views/signin.html and should look something like this:

<h1>Sign In</h1>

<input type="email" ng-model="user.email" placeholder="email">
<input type="password" ng-model="user.password" placeholder="password">
<button ng-click="signin()">Sign In</button>

This view is bound to the SigninController. It contains an email field, a password field, and a button that, when clicked, runs the signin method of the controller. Pretty simple as far as views go. Now it's time to create the SigninController, which will be located at app/controllers/signinController.js and should contain the following code:

module.exports = function(app) {
  app.controller('SigninController', function($scope, $http, $base64, $cookies, $location) {
    $scope.signin = function() {
      $http.defaults.headers.common['Authorization'] =
        'Basic ' + $base64.encode($scope.user.email + ':' + $scope.user.password);

      $http({
        method: 'GET',
        url: '/api/v1/users'
      }).success(function(data) {
        $cookies.jwt = data.jwt;
        $location.path('/notes');
      }).error(function(data) {
        console.log(data);
      });
    };
  });
};

This controller really only contains the signin function, which has two parts. First, the controller sets the authorization header for the request. Of note: passport's basic authentication expects the credentials to be base64 encoded. Base64 is an encoding, not encryption, and is trivially reversible, so it's no substitute for HTTPS; it is simply the format HTTP Basic authentication requires. The next portion of the signin function sends the request to the signin url, and on success it sets the jwt from the response as a browser cookie using the $cookies service. After setting the cookie it redirects to the /notes path. That means the next file to create is the notes view, in app/views/notesView.html.

<h1>Notes</h1>

<ul>
  <li ng-repeat="note in notes">{{note.noteBody}}</li>
</ul>

The notes view is simple: all it does is display the note body for each note. The next step is to add the note controller in app/controllers/notesController.js:

module.exports = function(app) {
  app.controller('NotesController', function($scope, $http, $cookies) {
    $http.defaults.headers.common['jwt'] = $cookies.jwt;

    $http({
      method: 'GET',
      url: '/api/v1/notes'
    }).success(function(data) {
      $scope.notes = data;
    }).error(function(data) {
      console.log(data);
    });
  });
};

This controller once again sets a header, but this time it is the JWT that was received after successfully authenticating and saved to a browser cookie. This does assume that the server side api can read the jwt from the headers and not the body of the request, which should be as simple as changing the line in jwtauth from var token = req.body.jwt to var token = (req.body && req.body.jwt) || req.headers.jwt. With that change in place, it should authenticate and send back an array of notes.

How to host a NodeJS app on an EC2 Ubuntu Server

I will take you through the process of setting up your first server on an Amazon Elastic Compute Cloud (EC2) Ubuntu Server.

Sign up for Amazon Web Services Free Tier

Tip: sign up with a new email if your account is older than a year.

Create a New Key Pair or Upload an SSH Public Key

Visit aws ssh key pairs. I have found it's easier to upload a public key that you've created on your own machine. Visit Github Help if you need help creating your own public/private key pair.

Find and launch an AMI

• Google AWS Marketplace
• Search for Ubuntu
• I chose this 64 bit image; you should too for this tutorial.
• Click the big yellow continue button
• Accept default options, except: make sure t1-micro is selected in EC2 Instance Type
• Launch with 1-Click

Connect to your EC2 Machine Instance

• Visit your EC2 Dashboard
• Instance state will be 'running' eventually
• Find the Public IP column and note the address
• ssh ubuntu@PUBLIC-IP-ADDRESS
• Make an A record on your domain in Route 53 for convenience

Install Prerequisites and Common Packages

The -y option is helpful because apt won't wait for you to press 'y'; it will just install the packages. Very helpful when you're trying to script this entire process.

sudo apt-get update && sudo apt-get install -y build-essential g++ tmux

Install Node, Build from Source

curl -O http://nodejs.org/dist/v0.10.29/node-v0.10.29.tar.gz
tar -xvzf node-v0.10.29.tar.gz
cd node-v0.10.29
./configure --prefix=/opt/node
make
sudo mkdir -p /opt/node
sudo chown -R ubuntu.ubuntu /opt/node
make install

Add node to your path in ~/.bashrc:

echo "export PATH=/opt/node/bin:$PATH" >> ~/.bashrc

Then reload .bashrc:

source ~/.bashrc

Double check to see that node is in your path:

which node
=> should be /opt/node/bin/node

Now, we need to add node to root's path too. To do this, we will need to use the visudo command to edit the secure path.

sudo visudo

Edit your Defaults secure_path= line, around the third line, to look like:

    Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/node/bin"

    The key here is to put the path to node at the end of the secure path.

Go ahead and save the file.

Install the Latest MongoDB

    Follow the directions here: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/

    To summarize:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
sudo apt-get update
sudo apt-get install mongodb-org

Install the Latest Redis

    Luckily, Chris Lea keeps an up-to-date ubuntu ppa available.

sudo add-apt-repository ppa:chris-lea/redis-server
sudo apt-get update
sudo apt-get install redis-server -y

Install the Latest Git

sudo add-apt-repository ppa:git-core/ppa
sudo apt-get update
sudo apt-get install git -y

    Test MongoDB is running

mongo
show dbs

    ctrl-d to exit

    Test Redis is running

redis-cli ping
=> should see PONG

A Neat Trick to Find the External IP

You can always find the External IP address of your server in the EC2 Dashboard, but I frequently use this shortcut from the command line:

curl icanhazip.com

Bower ALL THE THINGS

I mean, install bower and any other global npm packages you use frequently.

npm -g install bower grunt-cli

Clone Your App and Install NPM and Bower Packages

    I'll use one of our example apps.

    Make sure you're in the ubuntu home directory: /home/ubuntu

cd
git clone https://github.com/codefellows/javascript-b15-notes.git notes
cd notes
npm install && bower install

Launch the server on Port 80

To launch your app and bind to any port under 1024, you need to use sudo to escalate to root privilege.

sudo -i
PORT=80 node server.js

Visit the site: http://YOUR-IP-HERE

This will do in a pinch, but it's not a professional setup. What happens if your server reboots? You want something to restart the server automatically.

Install the Forever NPM Package

npm -g install forever

Forever is a simple CLI tool for ensuring that a given script runs continuously.
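Before wiring forever into an init script, you can drive it by hand as a sanity check (the path assumes the clone location above):

forever start /home/ubuntu/notes/server.js
forever list
forever stop /home/ubuntu/notes/server.js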

    Create /etc/init/notes.conf. This is an Ubuntu Upstart script.

    You can always use nano if you are afraid of Vim…

    /etc/init/notes.conf:

start on startup
stop on shutdown

expect fork

script
  PATH=/opt/node/bin:$PATH
  exec forever start /home/ubuntu/notes/server.js
end script

pre-stop script
  PATH=/opt/node/bin:$PATH
  exec forever stop /home/ubuntu/notes/server.js
end script

    Then sudo start notes to start the app

You can use sudo status notes to see the status of the service.

Table of Contents

Introduction
Prework
Connect to IRC
Day One
JavaScript Tools Overview
Github Pull Request Practice
Computer Setup
Make sure grunt works
For Linux: Compile Node from Source
Day Two
Async Demo
Hello Express
Day Three
Responsive Web Design
Grunt
Personal Blog Site Tutorial with Yeoman and Zurb
Day Four
Acceptance Testing with CasperJS
Sass
Heroku
Day Five
Day Six
Readings
Browserify
Browserify lab
Require.js
Day Seven
Unit Testing
REST
Day Eight
Test With Super Agent
Mongo, Mongoose and the REST
Day Nine
AJAX
Day Ten
Ember
Auth
Server Auth
Angular Client Auth
EC2