Taka’s blog

The blog of a software engineer who works at a start-up in London

Testing Patterns for External Web API Access

Writing automated tests that involve external Web API access isn't an easy job. You may run into various problems such as API rate limits, slowness due to network overhead, and little control over the API's behaviour.

I'll introduce four patterns for writing robust automated tests that involve external Web API access; which one to choose depends on your needs and restrictions.

Hereafter, I'll refer to an external Web API simply as an API, since that's an acceptable shorthand in this context.

Here are the four patterns:

  1. Real API
  2. Official or community-supported fake API
  3. API request stub (with contract tests)
  4. API wrapper and test doubles (with contract tests)

Roughly speaking, the more reliable but more costly (in some sense) a pattern is, the earlier it appears in this list. Patterns 2 to 4 involve some sort of test double, which usually makes your tests less resistant to refactoring. The list is also ordered by the layer the test doubles belong to - the pattern whose test doubles live in the outermost layer comes first.

Resilience to refactoring and design changes

Before looking into the four patterns, I'd like to mention this important concept - resilience to refactoring and design changes.

When you use test doubles, your tests will rely on them, of course. That means the code you can refactor without breaking tests is partly determined by your test doubles. Let's look at this example:

class ApiClient {
  exec(params) { ... }
}

class UseCase {
  doSomeBusiness() {
    // Do some business using ApiClient#exec
  }
}

When you write tests for UseCase#doSomeBusiness, you can use a test double in place of ApiClient#exec. But if you change the interface of ApiClient#exec, your tests will break because they rely on the old interface. If you don't use any test doubles and rely on the real API instead, you have no such worries.

Testing a private method has a similar issue. Even if refactoring a public method doesn't change its behaviour at all, tests for a private method it uses may start failing once that private method's interface changes.

This is also a matter of white box testing vs black box testing. You tend to write white-box-like tests when you use test doubles in an inner layer, because you need to know more implementation details to write such test doubles. The further out the layer where you place test doubles, the bigger the black box you can conceptually build. As an extreme example, if you write only E2E tests, you can refactor everything without breaking any tests as long as your E2E tests pass, because your tests only cover the largest black boxes, a.k.a. behaviours.

So far, it may have sounded like using test doubles, testing private methods, and white box testing are bad ideas - but not really; it depends. The point is that tests and test doubles define which interfaces you can change without breaking tests. As long as the interfaces of your test targets and test double targets are stable enough, they don't prevent you from refactoring.

This topic also reminds me of The Testing Trophy.

Real API

Just use the real API. No test doubles are required. In a sense, this is the easiest and most robust way to write tests involving an API.
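For illustration, such a test can be as small as the following minimal sketch (assuming Jest and Node 18+'s global fetch; the sandbox endpoint and payload are purely hypothetical):

// A test that hits a hypothetical sandbox API directly, with no test doubles
test('creates a payment against the sandbox API', async () => {
  const response = await fetch('https://sandbox.api.example.com/payments', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ amount: 100, currency: 'GBP' }),
  });

  expect(response.status).toBe(201);
});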

When

  • You have a sandbox API you can safely consume
  • Costs aren't problematic
  • Network overheads are at an acceptable level

Pros

  • Most robust (because it's the real API!)
  • No test doubles required

Cons

  • A sandbox API isn't always available
  • API rate limits may prevent you from calling the API that often
  • It may be quite costly
  • Network overheads may make your tests slow

Official or community-supported fake API

A fake is a type of test double that behaves as if it's a real one. In other words, with a fake, you can write tests as if you use the real one. What you need to do is just replace the real one with a fake somehow.

What I mean by a fake here is a fake server that responds to HTTP requests. One of the most popular fakes is LocalStack.

For instance, LocalStack S3 behaves as if it's a real S3. You can put, get, and delete an object against LocalStack S3.

What you need to do is:

  • Get a fake server running
  • Replace the endpoint with the fake server's one
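For example, with the AWS SDK for JavaScript v3 (an assumption - use whichever client your code already uses), pointing an S3 client at LocalStack's default edge endpoint might look like this sketch (the bucket name and dummy credentials are just for illustration):

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

// LocalStack listens on http://localhost:4566 by default
const s3 = new S3Client({
  region: 'us-east-1',
  endpoint: 'http://localhost:4566',
  forcePathStyle: true, // path-style URLs work better against a local fake
  credentials: { accessKeyId: 'test', secretAccessKey: 'test' },
});

// Assumes the bucket already exists in LocalStack
async function putSample() {
  await s3.send(new PutObjectCommand({
    Bucket: 'my-bucket',
    Key: 'hello.txt',
    Body: 'Hello, LocalStack!',
  }));
}

putSample().then(() => console.log('Uploaded to the fake S3'));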

You have to be careful to choose what fakes you'd like to use. Fakes have to conform to the real API. Otherwise, your tests will rely on something unreal.

That's why I explicitly mentioned "official or community-supported". They should be well-made and well-maintained as we need reliable and realistic fake servers.

When

  • There are reliable fake servers
  • You don't want to make real requests
  • You don't want to use test doubles other than fake servers

Pros

  • Robust depending on the reliability of your choice of fake servers
  • No test doubles required other than fake servers

Cons

  • Good reliable fake servers aren't always available
  • It's a bit cumbersome to spin up a fake server every time you run your tests
  • Sometimes, it's a bit hard to replace the endpoint with your fake server's one

API request stub (with contract tests)

This is another way to stub out HTTP requests to the Web API. It's usually a bit easier to adopt than a fake server. In JavaScript, you can do this with Nock.
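For example, a Nock stub might look like this minimal sketch (the host, path, and response body are hypothetical):

const nock = require('nock');

// Intercept GET https://api.example.com/users/1 and return a canned response
nock('https://api.example.com')
  .get('/users/1')
  .reply(200, { id: 1, name: 'Taka' });

// Any code under test that requests this URL now receives the stubbed response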

What's important is to write contract tests if possible. Contract tests usually run periodically to confirm the API works as expected. As long as the contract tests pass, your HTTP request stubs should be reliable because the contract hasn't changed.
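A contract test can be as small as the following sketch (assuming Jest, Node 18+'s global fetch, and the same hypothetical endpoint as the stub above); the point is to run it on a schedule against the real API rather than on every commit:

test('GET /users/1 still matches the shape our stubs assume', async () => {
  const response = await fetch('https://api.example.com/users/1');
  const body = await response.json();

  expect(response.status).toBe(200);
  expect(body).toEqual(expect.objectContaining({ id: 1, name: expect.any(String) }));
});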

When

  • You don't want to use test doubles other than HTTP request stubs
  • You can't seem to find good fake servers
  • It's too cumbersome to set up fake servers

Pros

  • No test doubles required other than HTTP request stubs
  • Easier to introduce than fake servers
  • A bit more control over the stubbed API's behaviour
  • Tests run fast

Cons

  • Writing HTTP request stubs can be cumbersome
  • Without contract tests, you may not be able to detect API changes

API wrapper and test doubles (with contract tests)

This is the easiest to implement because you don't need any extra libraries if you don't want them. Let's have a look at the same example again:

class ApiClient {
  exec(params) { ... }
}

class UseCase {
  doSomeBusiness() {
    // Do some business using ApiClient#exec
  }
}

If you inject an instance of ApiClient into UseCase, you can also inject a test double instead with no hassle. Of course, you can use libraries - in JavaScript, Sinon is quite popular for this purpose.
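For example, a hand-rolled test double could be injected like this minimal sketch (the constructor, method bodies, and values are assumptions; only the ApiClient#exec and UseCase#doSomeBusiness names come from the example above):

class UseCase {
  constructor(apiClient) {
    this.apiClient = apiClient;
  }

  async doSomeBusiness() {
    // Do some business using ApiClient#exec
    const result = await this.apiClient.exec({ query: 'something' });
    return result.value * 2;
  }
}

// In a test, inject a stub in place of the real ApiClient
const stubApiClient = {
  exec: async () => ({ value: 21 }),
};

new UseCase(stubApiClient).doSomeBusiness().then(console.log); // 42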

Contract tests, or tests for your wrappers, also play an important part in keeping your tests robust. Since you stub out your wrappers, the wrappers themselves should be well tested.

When

  • You can write well-designed wrappers
  • You can't seem to find good fake servers
  • It's too cumbersome to set up fake servers or HTTP request stubs

Pros

  • Really easy to introduce
  • Very good control over API's behaviour
  • Easier to understand what's stubbed when you read tests
  • Tests run fast

Cons

  • Your tests may not be very resilient to refactoring and design changes
  • You may need to write more white box like tests
  • Without contract tests, you may not be able to detect API changes

Use JavaScript Custom Errors Better

In the JavaScript world, pitfalls are everywhere; one of them is error handling.

This post introduces a good way of using JavaScript's custom errors.

TL;DR

necojackarc/extensible-custom-error enables you to define handy extensible custom errors easily, which can take either/both an error object and/or an error message.

const ExtensibleCustomError = require('extensible-custom-error');

class MyError extends ExtensibleCustomError {}

new MyError('message'); // Take a message
new MyError(error); // Take an error
new MyError('message', error); //  Take a message and an error

Stack traces in error objects get merged so that you won't lose any error details:

throw new MyError('Unlimited Blade Works', error);
MyError: Unlimited Blade Works
    at wrapErrorWithMyError (/home/necojackarc/custom_error.js:101:11)
Error: Have withstood Pain to create many Weapons
    at throwBuiltinError (/home/necojackarc/custom_error.js:94:9)
    at wrapErrorWithMyError (/home/necojackarc/custom_error.js:99:5)
    at main (/home/necojackarc/custom_error.js:107:5)
    at Object.<anonymous> (/home/necojackarc/custom_error.js:113:1)
    at Module._compile (module.js:652:30)
    at Object.Module._extensions..js (module.js:663:10)
    at Module.load (module.js:565:32)
    at tryModuleLoad (module.js:505:12)
    at Function.Module._load (module.js:497:3)
    at Function.Module.runMain (module.js:693:10)

Default Error Handling with JavaScript

JavaScript's try/catch is a bit inferior to other languages'; you can have only one catch clause. To handle specific errors, MDN advises using instanceof, like the following:

try {
  throw new TypeError();
} catch (e) {
  if (e instanceof TypeError) {
    console.log('Caught a Type Error')
  } else {
    console.log('Others');
  }
}

You'll see Caught a Type Error when you run it.

You can do the same thing with Promises:

const rejectPromise = () => new Promise((resolve, reject) => {
  reject(new TypeError());
});

rejectPromise().catch((error) => {
  if (error instanceof TypeError) {
    console.log('Caught a TypeError');
  } else {
    console.log('Others');
  }
});

This will also spit out Caught a TypeError. With async/await, you can use regular try/catch, so I don't show examples here.

Pitfalls with custom errors that extend the default Error

Let's see what will happen when you define custom errors just by extending the default error object.

Pitfall 1: Data gets lost with a simple inheritance

Custom errors that extend Error don't work well:

// Incomplete custom error
class MyError extends Error {}

function throwError() {
  throw new MyError('Custom error');
}

try {
  throwError();
} catch (error) {
  if (error instanceof MyError) {
    console.log(error);
  } else {
    console.log('Non-custom error');
  }
}

By running the code above, you'll see:

Error: Custom error
    at throwError (/home/necojackarc/custom_error.js:6:9)
    at Object.<anonymous> (/home/necojackarc/custom_error.js:10:3)
    at Module._compile (module.js:652:30)
    at Object.Module._extensions..js (module.js:663:10)
    at Module.load (module.js:565:32)
    at tryModuleLoad (module.js:505:12)
    at Function.Module._load (module.js:497:3)
    at Function.Module.runMain (module.js:693:10)
    at startup (bootstrap_node.js:188:16)
    at bootstrap_node.js:609:3

instanceof works properly, but the custom error name, MyError, has been lost.

You can get around this issue by overwriting the name property within the constructor, or by using Error.captureStackTrace, which is available on V8.

class MyError extends Error {
  constructor(...args) {
    super(...args);

    // `this.name = this.constructor.name;` can work, but
    // setting `enumerable` `false` gets it closer to a built-in error
    Object.defineProperty(this, 'name', {
      configurable: true,
      enumerable: false,
      value: this.constructor.name,
      writable: true,
    });

    if (Error.captureStackTrace) {
      Error.captureStackTrace(this, MyError);
    }
  }
}

Replace the old custom error with this one and run the same code above, then you will see:

MyError: Custom error
    at throwError (/home/necojackarc/custom_error.js:27:9)
    at Object.<anonymous> (/home/necojackarc/custom_error.js:31:3)
    at Module._compile (module.js:652:30)
    at Object.Module._extensions..js (module.js:663:10)
    at Module.load (module.js:565:32)
    at tryModuleLoad (module.js:505:12)
    at Function.Module._load (module.js:497:3)
    at Function.Module.runMain (module.js:693:10)
    at startup (bootstrap_node.js:188:16)
    at bootstrap_node.js:609:3

Now it works fine.

MDN mentions this solution as well.

Pitfall 2: The constructor only takes a message

Of course, you can pass anything to the constructor because this is JavaScript; however, the Error constructor only expects to receive a message, not an error object.

In other words, if you wrap a built-in error in your custom error like the following, you won't get the result you expect.

function throwError() {
  throw new Error('Built-in error');
}

try {
  throwError();
} catch (error) {
  throw new MyError(error);
}

When you run this, you'll see:

MyError: Error: Built-in error
    at Object.<anonymous> (/home/necojackarc/custom_error.js:31:9)
    at Module._compile (module.js:652:30)
    at Object.Module._extensions..js (module.js:663:10)
    at Module.load (module.js:565:32)
    at tryModuleLoad (module.js:505:12)
    at Function.Module._load (module.js:497:3)
    at Function.Module.runMain (module.js:693:10)
    at startup (bootstrap_node.js:188:16)
    at bootstrap_node.js:609:3

It looks good at first glance, but you've lost some of the stack traces.

What you see is the same as the output of error.toString(): its output is the error's name and message, so it's Error: Built-in error.

In short, stack traces are ignored when you pass an error object.

To cope with this issue, you need to add some logic to the constructor that merges the given error's stack trace into the one generated there.

With JavaScript's built-in errors, as I mentioned above, you can't pass both a message and an error, like throw new MyError('message', error), unlike in Java.
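As a rough sketch of the idea (not the library's actual implementation), the constructor could normalize its arguments and append the wrapped error's stack trace:

class MyError extends Error {
  constructor(messageOrError, error) {
    // Accept (message), (error), or (message, error)
    const wrapped = messageOrError instanceof Error ? messageOrError : error;
    const message = messageOrError instanceof Error ? messageOrError.message : messageOrError;

    super(message);

    Object.defineProperty(this, 'name', {
      configurable: true,
      enumerable: false,
      value: this.constructor.name,
      writable: true,
    });

    if (Error.captureStackTrace) {
      Error.captureStackTrace(this, this.constructor);
    }

    // Merge the wrapped error's stack trace so no details are lost
    if (wrapped && wrapped.stack) {
      this.stack = `${this.stack}\n${wrapped.stack}`;
    }
  }
}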

Summary of the Pitfalls around JS Custom Errors

  • Just extending Error isn't good enough to define custom errors
  • Wrapping an error isn't possible by default and some data gets lost

Ideal World

In the ideal world, you can define your own errors like:

class MyError extends Error {}

and you can instantiate your custom errors like:

new MyError('message'); // Take a message
new MyError(error); // Take an error
new MyError('message', error); // Take a message and an error

If only such a world existed...!

Library with which You Can Define Custom Errors Easily

Hello world, I've created a library with which you can define custom errors easily!

github.com/necojackarc/extensible-custom-error

As described in TL;DR, you can use it like the following:

const ExtensibleCustomError = require('extensible-custom-error');

class MyError extends ExtensibleCustomError {}

new MyError('message'); // Take a message
new MyError(error); // Take an error
new MyError('message', error); // Take a message and an error

Examples

Let's use it in practice.

The first example is passing an error object to it as its argument.

const ExtensibleCustomError = require('extensible-custom-error');

class MyError extends ExtensibleCustomError {}

function throwBuiltinError() {
  throw new Error('Unknown to Death, Nor known to Life');
}

function wrapErrorWithMyError() {
  try {
    throwBuiltinError();
  } catch (error) {
    throw new MyError(error);
  }
}

function main() {
  try {
    wrapErrorWithMyError();
  } catch (error) {
    console.log(error);
  }
}

main();

By running the code above, you get:

MyError: Error: Unknown to Death, Nor known to Life
    at wrapErrorWithMyError (/home/necojackarc/custom_error.js:101:11)
Error: Unknown to Death, Nor known to Life
    at throwBuiltinError (/home/necojackarc/custom_error.js:94:9)
    at wrapErrorWithMyError (/home/necojackarc/custom_error.js:99:5)
    at main (/home/necojackarc/custom_error.js:107:5)
    at Object.<anonymous> (/home/necojackarc/custom_error.js:113:1)
    at Module._compile (module.js:652:30)
    at Object.Module._extensions..js (module.js:663:10)
    at Module.load (module.js:565:32)
    at tryModuleLoad (module.js:505:12)
    at Function.Module._load (module.js:497:3)
    at Function.Module.runMain (module.js:693:10)

Cool! The full information of the error has been retained! So helpful!

The next example is to pass both a message and an error:

const ExtensibleCustomError = require('extensible-custom-error');

class MyError extends ExtensibleCustomError {}

function throwBuiltinError() {
  throw new Error('Have withstood Pain to create many Weapons');
}

function wrapErrorWithMyError() {
  try {
    throwBuiltinError();
  } catch (error) {
    throw new MyError('Unlimited Blade Works', error);
  }
}

function main() {
  try {
    wrapErrorWithMyError();
  } catch (error) {
    console.log(error);
  }
}

main();

Then,

MyError: Unlimited Blade Works
    at wrapErrorWithMyError (/home/necojackarc/custom_error.js:101:11)
Error: Have withstood Pain to create many Weapons
    at throwBuiltinError (/home/necojackarc/custom_error.js:94:9)
    at wrapErrorWithMyError (/home/necojackarc/custom_error.js:99:5)
    at main (/home/necojackarc/custom_error.js:107:5)
    at Object.<anonymous> (/home/necojackarc/custom_error.js:113:1)
    at Module._compile (module.js:652:30)
    at Object.Module._extensions..js (module.js:663:10)
    at Module.load (module.js:565:32)
    at tryModuleLoad (module.js:505:12)
    at Function.Module._load (module.js:497:3)
    at Function.Module.runMain (module.js:693:10)

The new error information stacks neatly on top of the existing error information!

Summary

Beware you need to write some code to retain error information when you use custom errors with JavaScript. To avoid such pitfalls, I've created necojackarc/extensible-custom-error as an npm module.

Oagariyo!

The original post was published on October 10th, 2018, by me in Japanese.

Write Hubot Scripts Easily with Babel ES201x

Using Babel, you can easily write Hubot Scripts with ES201x.

TL;DR

  1. Configure your Hubot following necojackarc-sandbox/hubot-babel-es201x
  2. Code your Hubot Scripts in src/bot directory with Babel ES201x

Prologue

I happened to need to make a bot for ChatOps, when I encountered Hubot, written in CoffeeScript, which had been left behind the times......

......

............

It surprised me a bit that Hubot hadn't had a commit in about 10 months.

I googled it*1 but couldn't find a good alternative. So I persuaded myself that this update frequency showed stability and that Hubot had already weathered many troubles, and decided to adopt it.

However, I no longer wanted to write CoffeeScript, so I decided to write my Hubot Scripts in ES201x with Babel.

How easy it is

It's really easy because it requires only Babel. Plus, the Babel configuration is very simple.

How it works

Hubot automatically loads .coffee and .js files in src/scripts and scripts.

Those files are expected to be in the following format:

module.exports = (robot) ->
  # your code here

In other words, these are Hubot Scripts.
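For reference, the equivalent format in plain JavaScript (roughly what our transpiled files end up exporting) would be:

module.exports = (robot) => {
  // your code here
};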

Taking it into account, I added the following directories this time:

  • dist - for transpiled files
  • src/bot - for Hubot Scripts written in ES201x
  • src/lib - for other files written in ES201x*2

A simple explanation of the flow is below:

  1. Write scripts in ES201x under src
  2. Transpile the files in src into dist
  3. Put a file in scripts that loads the transpiled Hubot Scripts in dist/bot

Configuration

Generate Hubot following Getting Started With Hubot.

Install Babel and the required preset and polyfill:

$ npm install -D babel-cli babel-polyfill babel-preset-env

Write Babel configuration in .babelrc:

{
  "only": [
    "src"
  ],
  "presets": [
    ["env", {
      "targets": {
        "node": "current"
      }
    }]
  ]
}

Add build command to package.json:

{
  "scripts": {
    "build": "babel src -d dist"
  }
}

Put a script in scripts that loads the Hubot Scripts transpiled into dist:

'use strict';

const fs = require('fs');
const path = require('path');

const ROOT = './dist/bot';

// Strip the file extension so require() resolves the module itself
const stripExtension = (file) => file.split('.').slice(0, -1).join('.');

module.exports = (robot) => {
  fs.readdirSync(ROOT).forEach((file) => {
    require(path.join('..', ROOT, stripExtension(file))).default(robot);
  });
};

Finally, make the necessary directories:

$ mkdir -p src/bot src/lib

Write a sample to check the configuration

As a test, put the following Hubot Script in src/bot/konosuba.js and try it out:

export default function(robot) {
  robot.hear(/アクア/, (res) => {
    res.reply('駄女神');
  });
}

Then, boot Hubot with bin/hubot:

hubot-babel-es201x> アクア様
hubot-babel-es201x> Shell: 駄女神

It works!

The original post was published on Mar. 12, 2017, by me in Japanese.

*1:12+ Frameworks to Build ChatOps Bots

*2: For JavaScript files other than Hubot Scripts

Make Images Watermarked with RMagick (+ CarrierWave)

Summary

This post explains how to cover images with a tiled array of a watermark.
Plus, I show an example of how to use it with CarrierWave.

What is a Watermark?

A watermark is an image that is often laid over sample images.
I made a sample with the fotowa*1 logo:

[Image: the fotowa watermark sample]

You can barely see it because it's highly transparent. Say you combine it with a photo; the result would look like this:

[Image: a photo covered with the tiled watermark]

This is also the final product of this post.

Make an array of a watermark with RMagick's mosaic

Compositing the watermark onto an image one tile at a time takes a lot of time. I tried it once, but it didn't finish in a realistic amount of time.

So, first build a tiled array of the watermark, and then combine it with the target image. That really speeds up the process.

mosaic is, as the name suggests, a feature that builds a mosaic by laying out many images.

image = Magick::Image.read("target.png").first

mark = Magick::Image.read("watermark.png").first
mark.background_color = "none" # If background color is specified, alpha channel becomes that color after combining images

tile = Magick::ImageList.new
page = Magick::Rectangle.new(0, 0, 0, 0)

# Tile the watermark until it covers the whole target image
(image.columns / mark.columns.to_f).ceil.times do |x|
  (image.rows / mark.rows.to_f).ceil.times do |y|
    tile << mark.dup
    page.x = x * tile.columns
    page.y = y * tile.rows
    tile.page = page
  end
end

One caveat is the background color of the watermark. For example, the background color of the watermark I showed earlier is white.

If you use a watermark with an explicit background color, the image will be covered with that background color, like this:

[Image: a photo covered with the watermark's white background]

Use it with CarrierWave

You can easily manipulate images with CarrierWave and RMagick, like process resize_to_fit: [2000, 1000].

Since process accepts any method you define, define a method and use it inside a version block:

class ImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::RMagick

  version :watermarked do
    process :watermark
  end

  def watermark
    mark = Magick::Image.read(Rails.root.join("lib", "assets", "watermark.png")).first
    mark.background_color = "none"

    manipulate! do |image|
      tile = Magick::ImageList.new
      page = Magick::Rectangle.new(0, 0, 0, 0)

      (image.columns / mark.columns.to_f).ceil.times do |x|
        (image.rows / mark.rows.to_f).ceil.times do |y|
          tile << mark.dup
          page.x = x * tile.columns
          page.y = y * tile.rows
          tile.page = page
        end
      end

      image.composite(tile.mosaic, 0, 0, Magick::OverCompositeOp)
    end
  end
end

Finally, this uploader creates and stores a watermarked version whenever an image is uploaded!

The original post was published on Jul. 8, 2016, by me in Japanese.

*1: A web service that I'm involved in as a Software Architect / Lead Software Engineer

Separate JavaScript Easily and Moderately from Rails with WebPack

Introduction

To dive into the world of front-end, I came up with an idea to separate JavaScript easily and moderately from a Rails project.

Others have already shared various ideas, but I found them overkill or requiring more than a couple of tools, so they seemed like high hurdles. That's why I looked for an easier approach.

TL;DR

I've committed all the settings mentioned here in an initial commit.

Advantages

  • It utilizes the good points of Sprockets
  • It introduces only WebPack as an additional tool, practically speaking
  • It needs only a few setup steps, which are easy to do
  • It's almost maintenance-free, I believe

Summary

You will program, build, and publish your application's JavaScript with the following flow:

  1. Program JavaScript under frontend directory
  2. Build JavaScript you developed under frontend to app/assets/javascripts
  3. Publish files under app/assets/javascripts without combining files

I think a good point of Sprockets is publishing files with fingerprints, the third step of this flow. Covering this feature with other front-end build tools costs a lot.

In other words, if you leave it to Sprockets, you can separate JavaScript from Rails without large scale settings.

Views are still in Rails, but this degree of separation is good enough; if you wanted to remove the entire front-end from Rails, it would probably make more sense to use Rails in API mode in the first place.

Concrete Flow

To make the summary a little more concrete, the flow is as below:

  1. Do modern JS development in frontend/src/javascripts/hoge.js
  2. Build frontend/src/javascripts/hoge.js with WebPack and output assets/javascripts/hoge.js
  3. Publish assets/javascripts/hoge.js with fingerprint using Sprockets

Tools and Versions

Basically, WebPack is the only additional tool. Babel is introduced so that files can be transpiled during a WebPack build.

  • Sprockets 3.4.1 (+ Rails 4.2.5)
    • Publication (with MD5 fingerprint)
  • WebPack 1.12.9 (+ Babel 6.1.18)
    • Build

Structure of Directories

Add frontend directory to the root of a Rails project.

$ ls
Gemfile       README.md     app/          config/       db/           lib/          spec/
Gemfile.lock  Rakefile      bin/          config.ru     frontend/     public/       vendor/

The structure of the frontend directory is as below:

$ tree frontend -I node_modules
.
├── config
│   ├── development
│   │   └── webpack.config.js
│   └── production
│       └── webpack.config.js
├── package.json
├── src
│   └── javascripts
│       └── application.js
└── test
    └── javascripts

Roughly, it consists of four elements:

  • package.json
  • Config of WebPack
  • Directory for source files
  • Directory for test files

Setup procedure

I'll explain the setup procedure.

  1. Set Sprockets to publish JS files separately
  2. Add JS build settings to WebPack
  3. Hook the WebPack build into precompile

1. Set Sprockets to publish JS files separately

By default, Rails bundles all JavaScript files into application.js for publication, so we need to change this setting.

Clean up under app/assets/javascripts

Make the directory empty because app/assets/javascripts will be the directory where built JS files are placed.

Add the following lines to .gitignore to exclude this directory from git management.

/app/assets/javascripts/*
!/app/assets/javascripts/.keep

Plus, as I wanted to publish CSS files separately, I removed *= require_tree . from app/assets/stylesheets/application.scss.

Add targets of precompile by Sprockets

Since only application.js and application.css are precompile targets by default, add a setting to config/initializers/assets.rb so that all JavaScript and CSS files become compile targets.

Rails.application.config.assets.precompile += %w(*.js *.css)

Exclude files whose names start with underscore

Some gems related to views sometimes raise errors at precompile.

In that case, it's fine to use the following setting instead of the one above; it excludes files whose names start with an underscore from the targets.

Rails.application.config.assets.precompile << /(^[^_\/]|\/[^_])[^\/]*(\.js|\.css)$/

Supplement: How to include JavaScript and CSS in Rails

Calling the following helpers inside view templates lets you include them:

  • the javascript_include_tag helper
  • the stylesheet_link_tag helper

2. Add JS build settings to WebPack

Install WebPack and Babel, which is used as a transpiler with WebPack. If package.json doesn't exist right under frontend yet, run npm init.

$ npm init
$ npm install -D webpack babel babel-loader babel-core

Example: Use React.js, ES2015 and Stage2

As an example, let's set things up to use React.js (0.14.3), ES2015, and Stage 2.

At first, install React.js:

$ npm install --save react react-dom

After that, install required presets of Babel:

$ npm install -D babel-preset-react babel-preset-es2015 babel-preset-stage-2

To enable WebPack to transpile JavaScript files, create the config files:

module.exports = {
  devtool: 'inline-source-map',
  entry: {
    application: './src/javascripts/application.js',
  },
  output: {
    path: '../app/assets/javascripts',
    filename: '[name].js'
  },
  module: {
    loaders: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        loader: 'babel?presets[]=react,presets[]=es2015,presets[]=stage-2'
      }
    ]
  }
}

If you omit the devtool option, you can use it as the production settings, I believe:

module.exports = {
  entry: {
    application: './src/javascripts/application.js',
  },
  output: {
    path: '../app/assets/javascripts',
    filename: '[name].js'
  },
  module: {
    loaders: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        loader: 'babel?presets[]=react,presets[]=es2015,presets[]=stage-2'
      }
    ]
  }
}

Now, if you execute webpack --config config/{environment}/webpack.config.js, the JavaScript files registered as entries will be built into app/assets/javascripts/*.js.

Using the above settings, frontend/src/javascripts/application.js will be built to app/assets/javascripts/application.js.

If you want to add other JavaScript files, you just need to add new entries and include them in views using the helper.
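For example (the admin entry name and path below are hypothetical), adding a second entry alongside application looks like this:

module.exports = {
  entry: {
    application: './src/javascripts/application.js',
    admin: './src/javascripts/admin.js', // hypothetical second entry
  },
  output: {
    path: '../app/assets/javascripts',
    filename: '[name].js'
  },
  // The module/loaders section stays the same as the config above
}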

Using npm run

Preparing commands such as npm run build is handy, so register the following three scripts in package.json: release (production build), build (development build), and watch (development build with file watching).

{
  "scripts": {
    "release": "webpack --config config/production/webpack.config.js",
    "build": "webpack --config config/development/webpack.config.js",
    "watch": "webpack --watch --config config/development/webpack.config.js"
  }
}

Then, if you run npm run build right under frontend, the WebPack build will be executed.

3. Hook the WebPack build into precompile

Finally, hook npm run release to run just before assets:precompile, so that you don't have to do anything extra at deployment:

task :build_frontend do
  cd "frontend" do
    sh "npm install"
    sh "npm run release"
  end
end

Rake::Task["assets:precompile"].enhance(%i(build_frontend))

Now, if you run rake assets:precompile, npm run release will be executed under frontend before assets:precompile.

Conclusion

To separate JavaScript easily and moderately from Rails, I adopted this approach:

  1. Build JavaScript into app/assets/javascripts with WebPack
  2. Publish the files under app/assets/javascripts as they are with Sprockets

The pros of this approach are that it needs only a few settings and additional tools and is easy to maintain. It keeps the JavaScript build settings simple and doesn't require forcefully replacing Sprockets.

Ever since I read Why we should stop using Grunt & Gulp, I'd been wondering whether I could set this up succinctly without Gulp. It turned out to be easier than I expected, thanks to embracing Sprockets.

The level of isolation seems moderate, I think.

Postscript: Complete separation

I said that the benefit of Sprockets is publishing files with fingerprints, but I found a plugin that can do the same for WebPack.

You can separate JavaScript from Rails completely, because WebPack with this plugin will also publish files with fingerprints.

After building the JavaScript, JSON like the following is output for resolving paths.

{
    "one": {
        "js": "/js/one_2bb80372ebe8047a68d4.bundle.js"
    },
    "two": {
        "js": "/js/two_2bb80372ebe8047a68d4.bundle.js"
    }
}

You can see an example with Rails under "Using this with Rails" in the README, and it looks really easy.

The original post was published on Nov. 29, 2015, by me in Japanese.

Went to RubyKaigi 2016 as a Ruby Sponsor

I went to RubyKaigi 2016 that was held at Kyoto International Conference Center in Kyoto, Japan from Sep. 8th to 10th.

rubykaigi.org

[Photo: the venue, Kyoto International Conference Center]

My company, PIXTA, was one of the Ruby sponsors*1 of this conference.

[Photo: PIXTA's Ruby sponsor display]

I had plans on the Saturday, so I was able to attend only the first two days, but I enjoyed the conference a lot.

Aside from the conference, the main shop of Tenkaippin Ramen (天下一品総本店) is located near the venue, so I had a ramen that is only available at that shop, Ramen with Beef Sinew (牛すじラーメン). That was great!

[Photo: Ramen with Beef Sinew]

Many things in the Kansai region brought back memories since I went to Osaka University, and I met up with some friends in Osaka during this trip.

The trip was so awesome and I had a lot of fun!

*1:Ruby sponsors are the supreme sponsors!

Cool Way to Control Retry with ActiveJob

ActiveJob brings benefits, but its feature set is still quite basic.

So, I've made a module that uses only officially published APIs to add some retry-control features.

An Issue about Retry

ActiveJob doesn't have enough features for retrying jobs as of today. It only provides retry_job*1, which enqueues the job again with some options.

A naive sample is here:

class RetryJob < ApplicationJob
  queue_as :default

  rescue_from(StandardError) do
    retry_job(wait: 5.minutes)
  end

  def perform(*args)
    # Do something later
  end
end

It seems okay. When a job fails for some reason, the job is enqueued again and performed in 5 minutes.

However, what if you want to limit the number of retries? In the sample above, if a job never succeeds, it retries forever. Unfortunately, there is no way to control that with ActiveJob's default features.

Of course, you can find some gems such as ActiveJob::Retry, but there is no dominant gem in this field yet. As far as I can tell, ActiveJob::Retry is the gem with the most stars on GitHub, but it isn't sophisticated enough to use in production.

This is an alpha library in active development, so the API may change.

isaacseymour/activejob-retry

Plus, I feel the gem is somewhat heavyweight.

What We Really Want to Do

I think it will be fine if we can set a retry limit, find out the attempt number, and check whether the retry count has been exceeded.

So, what we want to do are:

  • Setting the retry limit
  • Finding out the attempt number
  • Checking whether the retry limit has been exceeded

Like this:

class LimitedRetryJob < ApplicationJob
  queue_as :default
  retry_limit 5

  rescue_from(StandardError) do |exception|
    raise exception if retry_limit_exceeded?
    retry_job(wait: attempt_number**2)
  end

  def perform(*args)
    # Do something later
  end
end

Let's implement these methods.

How to Implement it

To tell you the truth, the official documentation gives us a great idea for this. Overriding serialize and deserialize lets us carry over instance variables that contain serializable objects.

Now, we can implement the above idea like this:

class ApplicationJob < ActiveJob::Base
  DEFAULT_RETRY_LIMIT = 5

  attr_reader :attempt_number

  class << self
    def retry_limit(retry_limit)
      @retry_limit = retry_limit
    end

    def load_retry_limit
      @retry_limit || DEFAULT_RETRY_LIMIT
    end
  end

  def serialize
    super.merge("attempt_number" => (@attempt_number || 0) + 1)
  end

  def deserialize(job_data)
    super
    @attempt_number = job_data["attempt_number"]
  end

  private

  def retry_limit
    self.class.load_retry_limit
  end

  def retry_limit_exceeded?
    @attempt_number > retry_limit
  end
end

If you use this ApplicationJob, you will be able to set a limit on each job through ApplicationJob.retry_limit, get the number of attempts via ApplicationJob#attempt_number, and check whether the retry count exceeds the limit by calling ApplicationJob#retry_limit_exceeded?.

  • ApplicationJob.retry_limit
    • To set the retry limit
  • ApplicationJob#attempt_number
    • To find out the attempt number
  • ApplicationJob#retry_limit_exceeded?
    • To check whether the retry limit has been exceeded

Use in production

I’ve made a module based on this idea for use in production since it’s not a good idea to add methods, which not all subclasses require, to the superclass.

necojackarc/active_job_retry_controlable.rb - Gist

The module adds ApplicationJob.retry_limit, ApplicationJob#attempt_number, and ApplicationJob#retry_limit_exceeded? to your jobs.

Pros

It only calls officially documented APIs, so it's not easy to break. Since it provides just simple methods, you can easily build your own retry logic.

Cons

It's dead simple, so you need to implement your own retry features on top of it. It never makes jobs retry themselves automatically.

*1: Apparently, you can use enqueue instead of retry_job; both look identical. ref: ActiveJob::Enqueuing