Category: Blog

Some notes on a basic Capistrano deployment

My notes when starting out with Capistrano.

Capistrano lets you automate deploys. You specify some settings in an organized fashion; Capistrano then logs into your server and runs the scripts associated with those settings – whether they are built into Capistrano or custom-defined by you. Things like downloading the latest copy of the code, running migrations, cleaning up, and rolling back a failed deploy can all be done by specifying a few settings or writing a “task” for Capistrano to run at some point in the deploy.

In our case, we have a simple, static website – a handful of HTML, CSS, and JavaScript files as well as a folder full of images and other static assets. Until now, we have had to upload the files to the server manually whenever something changed. Even with a setup this simple, manual uploads can be problematic: I’ve sometimes forgotten to upload all the files, and at other times I’ve forgotten to run the commands on the server that fix the permissions on the files. Automating this would be awesome!

A few pre-requisites:

  • Capistrano needs Ruby (the language). Using the Bundler gem to manage your Ruby gems is nice, although not mandatory. I use Bundler. If you are using Capistrano with Ruby on Rails, then Bundler is kind of a requirement anyway.
  • Capistrano will need some way to get the latest version of your code. In our case, we’ve decided to host a git repository on the same server.
  • For any environment, Capistrano runs all commands as one user. You need to make sure that your local user’s SSH public key is in the authorized_keys file in the remote user’s .ssh directory. This lets Capistrano SSH into the server without asking for a password.
  • There’s some support for interactive shells (where the system asks for data to be entered) but I haven’t explored this. It is simpler to make sure that the permissions on the server are set up correctly. In our case, we’re going to download files from the git repo to a folder, update their permissions so that they belong to the www-data group, and then move them to the appropriate public_html directory. That said, this might be a dumb way to do it when we could deploy the code directly to a versioned directory and then update a symlink to point at the latest folder. The latter functionality is built into Capistrano.
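
To give an idea of what “specifying a few settings” looks like, here is a minimal config/deploy.rb sketch in the Capistrano 3 style. Every value below is a placeholder, not our actual configuration:

```ruby
# config/deploy.rb — minimal sketch; all names and paths are hypothetical.
set :application, "our_static_site"
set :repo_url,    "deploy@example.com:/srv/git/our_static_site.git"
set :deploy_to,   "/var/www/our_static_site"

# Keep a few old releases around so a rollback has something to roll back to.
set :keep_releases, 5
```

With just these settings, `cap production deploy` already knows where to fetch the code from and where to put it on the server.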

Some things to keep in mind when writing rake tasks:

  • Make sure the file ends with a .rake extension or Capistrano / Rake will ignore it.
  • The “desc” statement is a single, standalone line, even though it documents whatever task comes after it. No “end” is associated with it.

That is all for now. More to come if I think of it.

 

This one (weird trick) meeting will energize, organize and direct!

My first manager showed me his notebook. He said, “I write the date on top of each page. I note to-do items and questions as they come up and then forget about them. When I need something to do, I refer to my notebook.” Not having to remember everything reduces stress and allows us to focus on one task at a time. Then, when we’re done, we can open the notebook to see what else needs to be done. If we’re not sure what to pick, we can ask our manager for guidance.

What if you are the manager? There may not be anyone to ask for guidance. This happened to me – a few years after learning about the notebook, I was hired to manage a startup whose CEO was busy raising the next round.

Context is everything.

I started with my notebook, adding questions and obvious to-do items: organize the product, look for a new office, plan work for the developers, etc. Then, after finding answers to most of my questions, I started on the to-do list and realized that I lacked intuition. I kept having to ask the CEO for guidance on everything: “okay, what kind of an office do we need?”, “what sort of features do we need to build?”, “how should I organize the product’s features in our project management system?”. I just didn’t have a feel for things, which translated into a lack of confidence. Things got worse after a weekend off. On Monday morning, the previous week felt like a blur – lots of busy work but nothing holding it all together. I didn’t have context.

“You don’t know where you’re going until you know where you’ve been”.

I opened my notebook and on a new page, wrote down what I knew of the company’s current state of affairs and divided it by department (marketing, finance, sales, product, operations etc.).
Then, I wrote down whatever we had done during the previous week for each department. Some items were complete while others needed to be checked on for status. Writing this down gave me the context that I needed to think about what the next steps should be. I immediately felt more comfortable in my role. After a couple of weeks, the CEO took notice of my exercise and asked to participate. We started a shared document online and spent 15 minutes every Monday morning going over the events of the previous week and what needed to get done next.

I call this practice The Monday Morning Kickoff. Here’s how it works:

In a shared document or whiteboard, fill out two sections: “What happened last week?” and “What are we doing this week?”. Each section is broken down further by all the functions / projects that the team is responsible for. All team members are allowed to contribute to all parts of the document.

For executive teams the subsections would be high level (marketing, sales, product-development, operations, finance and accounting etc). For other teams the subsections may be projects or team responsibilities such as outreach, bugs, features, tech-debt, misc, etc.

In the “What happened last week?” section, refrain from assigning blame or credit for any of the events. The team gets the credit and takes the blame – what’s done is done. This exercise is all about getting in gear for the week ahead.

The “What are we doing this week?” section should be filled in collaboratively as well. Individuals can be given assignments if needed, especially for quick reference during the week, but the canonical place for managing work and assignments should be whatever ticketing system the team uses (Jira, Trello, Asana, etc.). The items in this section can be in response to the “what happened” section or a result of already-planned tasks.

As you add weekly entries into this document, it may very well turn into a historical reference. But never forget – the point of this exercise is to create continuity – to give context to you and your team – to help you answer “Why are we doing this? Why now?”.

Why should I do this on Monday morning, and not on Friday or Tuesday?
  • Think of this as the kickoff meeting for the week. If it were done on any other day, it would lose its value as a kickoff meeting.
  • The start of the week is also when people suffer from memory loss the most.
  • It gets the team together and allows everyone to focus on the week ahead.
What if I broke this up into two meetings? One on Friday to look back at the week and one on Monday to look forward?
  • I haven’t done this yet, but I can see it potentially working.
  • Of course, you’d have to review what was covered on Friday again on Monday.
  • You’re also likely to have more team members unavailable for a Friday afternoon / evening meeting.
Can we make this more efficient by assigning a note-taker and having everyone speak their contribution out loud?
  • In my experience, while this may seem like an optimization, it has two negative effects:
  • Team members only pay attention up until the point when it is their turn to speak.
  • It turns a collaborative activity – one that energizes and gets everyone to participate – into a reporting activity. In competitive environments, the meeting stops being about the team’s previous week and becomes a recitation of individual achievements.

How to switch to your project’s version of node when you cd into it.

If you’re using nvm to manage multiple versions of node.js, then you’ll want to automatically switch to the appropriate version of node for a project when you switch (cd) into its directory. The simplest, easiest, and most elegant way I’ve found so far is to add the following lines to your .bash_profile or .bashrc:


# Method to check for existence of .nvmrc
# and switch versions if needed.
load-nvmrc() {
  if [[ -f .nvmrc && -r .nvmrc ]]; then
    nvm use
  fi
}

# Override the cd command to load nvmrc
# whenever someone uses cd to switch into
# a directory
function cd () {
  builtin cd "$@" && load-nvmrc
}

Essentially we’re overriding the cd command to do some extra lifting. After adding the above code, you’ll want to reload / restart your terminal. Also, you’ll want to add an .nvmrc file to each project that uses a specific version of node. In the .nvmrc file, simply specify the version of node you’d like nvm to load when you cd into that project.

One of my projects’ .nvmrc files looks like this:

4.4.5

How to fix Webpack when it can’t find your modules

If you are having problems with Webpack that produce the following errors:

Module not found: Error: Cannot resolve module 'some_module_name' in 'path/to/your_file'

Then, here are the steps to fixing it:

First, run Webpack with the --display-error-details flag, like so:

webpack --progress --color --watch --display-error-details

In my case, Webpack runs automatically when I run npm run dev, so I had to add --display-error-details to the webpack command in my package.json file.

Now when you run Webpack, you’ll get a far clearer idea of what is actually wrong. Here are some possible issues / fixes:

  • There may be a typo in your webpack.config – make sure everything looks right. Check it twice.
  • Your webpack.config could be missing a key detail / config. This was ultimately what was causing the error for me (details below).
  • If the module that Webpack is unable to find was written by you, make sure that the file names don’t have any trailing spaces. For example, it is very difficult to see the difference between ‘myfile.js’ and ‘myfile.js ‘. The latter has a space after it.
  • If Webpack is unable to find a third-party module such as react or redux, make sure that it is actually installed. Run npm install --save missing_module_name (replacing missing_module_name with the actual module’s name) just to be sure.
  • As a last ditch Hare Krishna! / Hail Mary! / Hail Pasta!, you can try cleaning out and reinstalling your modules: npm cache clean && rm -rf ./node_modules && npm install.

My specific error and how I fixed it:

Whenever I ran npm run dev, I would get the following error trail:

ERROR in ./front/client/front_desk.jsx
Module not found: Error: Cannot resolve module 'react' in ...path_to/front_desk.jsx

And that wasn’t the only one – I was getting this for all the 3rd-party modules I was importing, even though they were installed correctly.

Finally, after some digging, I found that we could get more verbose errors out of Webpack by adding the --display-error-details flag. What’s awesome about this flag is that Webpack will list all the paths it tried when looking for the missing module. In my case these were:

[../node_modules/react.js]
[../node_modules/react.jsx]
[../node_modules/react.js]
[../node_modules/react.jsx]
[../node_modules/react/index.js]
[../node_modules/react/index.jsx]
[../node_modules/react/react.js.js]
[../node_modules/react/react.js.jsx]

So, there was something wrong with how I was telling Webpack to resolve file extensions. I was using:

resolve: {
  extensions: ['.js', '.jsx'],
},

when it should have been

resolve: {
  extensions: ['', '.js', '.jsx'],
},

which fixed the problem. (Webpack 1 requires the empty string so that exact file names still resolve; Webpack 2 removed this requirement.)

Firefox vs. Chrome SDKs

I wrote an extension for Firefox and Chrome. Here’s what happened:

TL;DR: The Chrome SDK felt simpler, with clearer documentation. Firefox’s developer community is incredibly helpful, and their review process will make you a better programmer. You can’t go wrong with either one – but the Chrome SDK will get you there faster.

First Impressions:
I wrote the Firefox extension first because I use Firefox. I downloaded the SDK and followed the introductory tutorial. After the intro, there are some good tutorials on how to structure the extension and how to do specific tasks like detecting a webpage load. I then learned that there are high-level API calls and low-level API calls.

Feasibility Study:
While the high-level APIs are well-documented, many low-level APIs are marked as “unstable” or “deprecated”. This was unnerving, since I found that I needed an unstable API for my extension’s core functionality.

Eventually, with sufficient searching, asking questions on the Mozilla Developer IRC channels, and testing out code examples, I was able to get the basics working. There are some incredibly kind and gracious developers in the Mozilla developer network. I would not have gotten very far without their help.

Basic Functionality:
I needed to let users manage a list of websites, and update the plugin’s behavior whenever the list was saved. In Firefox, content modules don’t have direct access to storage. This means that instead of just saving the list whenever the user presses the Save button, we have to write some message-passing code to pass the list to the main module, which then saves it.

Plugin Options:
Firefox lets us create Preferences for extensions, which can be accessed by opening the browser’s Add-ons window (Tools > Add-ons). The way to do this is by specifying the preferences, their data types, and some other basic info in the extension’s manifest file. Then, in the extension’s main module, we can handle changes to the preferences through a listener. This means that Firefox stores and treats preferences separately from other data stored by the extension.

In my case, changing preferences also affected the content module, so this meant writing some more message-passing code. By the end though, the extension worked just as I’d wanted it to.

The Chrome SDK:
Once I released the Firefox version, many of my friends asked for a Chrome extension as well. I was pleasantly surprised to find the Chrome SDK more straightforward. The Chrome extension, which has the exact same functionality, resulted in less code and clearer, more consistent modules than my Firefox extension. Here’s what they did right:

  • Just one API: no separate high-level and low-level APIs. It felt clearer – in fact, it was so good that I never had to turn to the chat rooms for help.
  • One storage for all things: There is no special data storage for preferences. This means you only have to deal with one data-store for everything.
  • Open-ended Preferences: In Chrome, the preferences screen is free-form – just another HTML document. Put whatever you want in it, and save the user’s input directly into the extension’s data store.
  • Consistency across modules: Since all modules have access to the data store, there is no need for special message passing code! This cleaned up my code by quite a bit and essentially standardized the basic structure of all modules.

Just like the Chrome section of this article, the Chrome extension took about half as much code to write – though it helped that I had already slogged through all the learning while building the Firefox extension :).

Rails 2.3 and options_for_select

I spent half a day on Tuesday trying to debug the following:

<%= f.select :number_of_items, options_for_select([1,2,3,4,5], @number_of_items), {}, {:class => "OrderField"} %>

Here, @number_of_items holds the previously selected value, so the selection can be restored when the form is re-rendered – for example, after the user erroneously entered something elsewhere in the form.

Turns out that the value returned from the form and assigned to @number_of_items is a string. options_for_select is building a selection list whose values are integers, but it is being told to mark a string as the default – so no option ever matches.

The form keeps loading without the default value selected, and you’ll waste hours wondering what’s wrong.

Solution: Do some type conversion like so:
<%= f.select :number_of_items, options_for_select([1,2,3,4,5], @number_of_items.to_i), {}, {:class => "OrderField"} %>

..Yup.

Some tricks to make JavaScript projects manageable

Most of the code in playr.me used to sit in one file – player.js. As I added the code for essential capabilities, the file began to grow. It finally got to the point where I often spent more time navigating the file than actually adding code. At the very least, I needed to break it up into multiple files.

But what would be the best way to break things up into files? How would I “include” those files into my project when JavaScript doesn’t have includes? And finally, how would communication work? Making calls across files means opening up multiple files later to track down bugs!

Problem 1: How to include multiple files in a JavaScript project.
Solution: Use JavaScript to create a script element for each external file and add them to the document. Here’s how I did it for Playr.me.

Problem 2: Logical structure for dividing the files?
Solution: Divide the project up into modules. I read Anthony Colangelo’s “The Design of Code: Organizing JavaScript”, which is a great write-up on creating modules in JavaScript.

Problem 3: How will the modules communicate?
Of course, one can use ModuleName.methodName() to make method calls.
But there is a better way for certain situations: we can create JavaScript events that each module can independently trigger and/or respond to!
I read “How to Create Custom Events in JavaScript” by Craig Buckler. Each playr.me module responds to a custom event called playrStatusChanged and then acts accordingly.
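
A minimal sketch of the idea – the module names here are hypothetical, not playr.me’s actual code. In the browser you would listen and dispatch on `document`; a plain EventTarget is used here so the snippet also runs under Node:

```javascript
// Two modules that communicate only through a shared event, never directly.
const bus = new EventTarget();

const Playlist = {
  init() {
    // React whenever any module announces a status change.
    bus.addEventListener('playrStatusChanged', (e) => {
      console.log('Playlist reacting to status: ' + e.status);
    });
  },
};

const Controls = {
  play() {
    const e = new Event('playrStatusChanged');
    e.status = 'playing'; // attach a payload; in browsers, CustomEvent's `detail` does this
    bus.dispatchEvent(e);
  },
};

Playlist.init();
Controls.play(); // logs "Playlist reacting to status: playing"
```

The nice part is that Controls never needs to know Playlist exists – any number of modules can subscribe to the same event.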

Making Youtube Fast

I attended a talk yesterday by the folks at YouTube about how they try to make a visitor’s experience as fast, or at least fast-feeling as possible. There was so much covered! Here’s what I still remember:

  • YouTube is a single-page app. All the JavaScript is loaded the first time we visit one of their pages. After that, the entire experience is managed using JavaScript callbacks. This saves a lot of bandwidth since the only thing that changes is content.
  • They do a lot of A/B testing on real users.
  • They’ve created a library to handle browser interactions – back / forward buttons, server callbacks, prioritizing the loading of on-screen objects… they love it, and they said they’ll open-source it soon! In the meantime, roll your own.
  • They worry about the perception of speed and not just actual speed. The red loading bar on top of the new interface, for example, makes users feel that it’s running faster than it is.
  • Prioritize the loading of objects that are above the fold. Current JavaScript XHR lets you do this – take advantage of it.
  • Request / send objects from the server in batches, rather than all at once.
  • Try to stay mindful of when users expect certain items to work. For example, they probably expect the video to load first and keep playing while the other objects, such as thumbnails, keep loading. So, start playing the video first and make sure it is sufficiently buffered before loading other things.
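
The two loading tips can be sketched together. This is my own toy illustration, not YouTube’s code – `load` stands in for a real XHR/fetch call and just records completion order:

```javascript
// Load above-the-fold items first, then fetch the rest in small batches.
function loadPage(items) {
  const order = [];
  const load = (item) => Promise.resolve().then(() => order.push(item.name));

  const aboveFold = items.filter((i) => i.aboveFold);
  const rest = items.filter((i) => !i.aboveFold);

  return Promise.all(aboveFold.map(load))          // visible content first
    .then(async () => {
      for (let i = 0; i < rest.length; i += 2) {   // then the rest, two at a time
        await Promise.all(rest.slice(i, i + 2).map(load));
      }
      return order;
    });
}

loadPage([
  { name: 'video', aboveFold: true },
  { name: 'thumb1' },
  { name: 'thumb2' },
  { name: 'comments' },
]).then((order) => console.log(order.join(', ')));
// logs: video, thumb1, thumb2, comments
```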

There was a bunch more discussed. Some of it was YouTube specific and some of it is currently beyond my understanding. The speakers were all really fantastic.

JavaScript’s function.apply() function..

As seen in A Re-Introduction To JavaScript

JavaScript functions are objects. So, when we declare a function, we get a bunch of other stuff for free! Take the apply() method, which lets us pass an array as the argument list of a function. Let me explain:

Say we have a function that calculates the average of the numbers passed into it:

function getAverage() {
  var total = 0;                                 // declare locally; don't leak a global
  for (var i = 0; i < arguments.length; i++) {   // for..in over arguments is unreliable
    total += arguments[i];
  }
  return total / arguments.length;
}

We can now call this function to get the average of one or more numbers. Calling:
getAverage(4,2,3,5);
returns 3.5

We can also:
getAverage.apply(null, [4,2,3,5]);
which also returns 3.5!
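
The same trick works on built-in functions – Math.max, for example, normally takes its numbers as separate arguments:

```javascript
// apply() spreads an array into any variadic function:
var nums = [4, 2, 3, 5];
console.log(Math.max.apply(null, nums)); // logs 5
```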

Also, this goes to show that functions are indeed objects!

How to do fullscreen in JavaScript and CSS

I’ve been using the YouTube JavaScript Player API for a project. Something that I needed to figure out was how to create fullscreen controls for my custom video player. Here’s how:

Take the element (probably a div) to be full-screened, and full-screen it with the following JavaScript:

var c = document.getElementById('id_of_div_being_fullscreened');
// Browser-specific fullscreening:
if (c.requestFullscreen) {
  c.requestFullscreen();        // standard
} else if (c.mozRequestFullScreen) {
  c.mozRequestFullScreen();     // Firefox
} else if (c.webkitRequestFullScreen) {
  c.webkitRequestFullScreen();  // Chrome / Safari
}

Now, what happens is fun. We can give elements CSS properties that apply only in full-screen mode. When an element is full-screened, WebKit browsers match it with the :-webkit-full-screen pseudo-class, Firefox uses :-moz-full-screen, and the standard pseudo-class is :fullscreen. How to use it:

  .MyPlayer:-webkit-full-screen {
    margin-top:0px;
    display:block;
    width:100%;
  }

  .MyPlayer:-webkit-full-screen .DefaultControls {
    z-index:2;
    display:block;
    top: 0px;
    position:absolute;
  }

As an aside: I noticed something while browsing through videos on Vimeo and YouTube: in this day of excellent HTML5 video, they still use Flash. Why? Because the Flash experience is the same across all browsers. When we try to full-screen a video, regardless of browser, the video instantly becomes full-screen, along with the controls, which behave appropriately. I bet the folks that worked on the Flash control don’t have to worry about detecting what browser the user is in.
