Category: Dev

  • Loading test data with Play Framework Evolutions

    In a previous article I described how to load test data that your ScalaTest Play Framework functional tests might need, using Play Framework’s Evolutions. That approach used the SimpleEvolutionsReader class, with the evolutions defined in the test setup code.

    Recently I wanted to also load some test data from a file and so turned to the ClassLoaderEvolutionsReader class which loads resources from the class path.

    The trouble was I wanted to apply the schema from my standard evolution files first and then load the test data. The ClassLoaderEvolutionsReader requires evolutions revisions to start at 1 which would conflict with the standard application evolutions already applied.

    So I wrote a custom SingleRevisionClassLoaderEvolutionsReader that reads a single revision from the class path.


    import play.api.db.evolutions.{ClassLoaderEvolutionsReader, Evolution}
    import play.api.libs.Collections

    /**
     * Evolutions reader that reads a single revision from the class path.
     *
     * @param revision the revision number to load
     * @param prefix   a prefix that gets added to the resource file names
     */
    class SingleRevisionClassLoaderEvolutionsReader(val revision: Int, val prefix: String)
        extends ClassLoaderEvolutionsReader(prefix = prefix) {

      override def evolutions(db: String): Seq[Evolution] = {
        val upsMarker = """^#.*!Ups.*$""".r
        val downsMarker = """^#.*!Downs.*$""".r

        val UPS = "UPS"
        val DOWNS = "DOWNS"
        val UNKNOWN = "UNKNOWN"

        val mapUpsAndDowns: PartialFunction[String, String] = {
          case upsMarker() => UPS
          case downsMarker() => DOWNS
          case _ => UNKNOWN
        }

        val isMarker: PartialFunction[String, Boolean] = {
          case upsMarker() => true
          case downsMarker() => true
          case _ => false
        }

        loadResource(db, revision).map { stream =>
          val script = scala.io.Source.fromInputStream(stream).mkString
          // Split the script into sections on the !Ups/!Downs marker comments,
          // then group the section bodies under their markers
          val parsed = Collections.unfoldLeft(("", script.split('\n').toList.map(_.trim))) {
            case (_, Nil) => None
            case (context, lines) => {
              val (some, next) = lines.span(l => !isMarker(l))
              Some((next.headOption.map(c => (mapUpsAndDowns(c), next.tail)).getOrElse("" -> Nil),
                context -> some.mkString("\n")))
            }
          }.reverse.drop(1).groupBy(i => i._1).mapValues { _.map(_._2).mkString("\n").trim }

          Evolution(
            revision,
            parsed.getOrElse(UPS, ""),
            parsed.getOrElse(DOWNS, ""))
        }.toList
      }
    }

    object SingleRevisionClassLoaderEvolutionsReader {
      def apply(revision: Int, prefix: String = "") =
        new SingleRevisionClassLoaderEvolutionsReader(revision, prefix)
    }

    You can then place your evolution files in /test/resources/evolutions/default/ and apply them after your standard evolutions in your test setup. For example, if your test data was in a file called 100.sql:

    // Load the database schema
    Evolutions.applyEvolutions(db)
    
    // Load the test data
    Evolutions.applyEvolutions(db, SingleRevisionClassLoaderEvolutionsReader(revision = 100))
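    The 100.sql file itself follows the standard evolution file format, with !Ups and !Downs marker comments separating the apply and revert sections. A minimal sketch (the table and rows here are illustrative):

```sql
# --- !Ups

insert into users (name, email) values ('Test User', 'test@example.com');

# --- !Downs

delete from users;
```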
  • Java enums can implement interfaces

    Java enums are handy things. Often used as an effective replacement for simple int constants, they can also have methods and fields and implement arbitrary interfaces.

    Joshua Bloch has lots of interesting things to say about them in his excellent book, Effective Java. Item 34 describes a way to emulate extensible enums with interfaces but having an enum implement an interface can also be a simple way to split a large, unwieldy enum into smaller parts.

    Consider a simple enum that contains a code and a message:

    public enum Code {
        
        CODE_A("alpha"),
        CODE_B("bravo"),
        CODE_Y("yankee"),
        CODE_Z("zulu");
    
        private final String message;
    
        Code(String message) {
            this.message = message;
        }
    
        public String getMessage() {
            return message;
        }
    }
    

    Perhaps you have hundreds of codes associated with different parts of your application, but you want to do some common processing on them. Rather than maintaining one big enum full of all the codes in your system, you could extract an interface and have your enums implement it. The enums could then be much smaller and placed with, or near, the code that uses them directly.

    Here’s the Code interface:

    public interface Code {
        String getMessage();
    }
    

    Your smaller enums would implement it:

    public enum EarlyCode implements Code {
        CODE_A("alpha"),
        CODE_B("bravo");
    
        private final String message;
    
        EarlyCode(String message) {
            this.message = message;
        }
    
        @Override
        public String getMessage() {
            return message;
        }
    }
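    Any code written against the Code interface then accepts values from every implementing enum. Here is a minimal sketch; the LateCode enum and the describe helper are hypothetical additions for illustration:

```java
interface Code {
    String getMessage();
}

enum EarlyCode implements Code {
    CODE_A("alpha"),
    CODE_B("bravo");

    private final String message;

    EarlyCode(String message) {
        this.message = message;
    }

    @Override
    public String getMessage() {
        return message;
    }
}

// A second, hypothetical enum implementing the same interface
enum LateCode implements Code {
    CODE_Y("yankee"),
    CODE_Z("zulu");

    private final String message;

    LateCode(String message) {
        this.message = message;
    }

    @Override
    public String getMessage() {
        return message;
    }
}

class Codes {
    // Common processing: works for a value of any enum that implements Code
    static String describe(Code code) {
        return code + ": " + code.getMessage();
    }
}
```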
    

    The only downside is that you can’t share behaviour between the enums (at least no more than Java 8 allows for interfaces, via default and static methods), but in this case it doesn’t matter because the amount of duplicated code is small.

  • Jumbled Headers

    Have you ever noticed misspelled HTTP response headers?

    $ http -h http://www.dmoz.org/Computers/Programming/Languages/Python/Books/
    HTTP/1.1 200 OK
    Connection: close
    Content-Encoding: gzip
    Content-Language: en
    Content-Length: 9907
    Content-Type: text/html;charset=UTF-8
    Cteonnt-Length: 33416
    Date: Tue, 12 Apr 2016 12:26:06 GMT
    Server: Apache
    

    That ‘Cteonnt-Length’ sure looks weird!

    According to this StackOverflow answer, the jumbled header contains the uncompressed size of the response and, sure enough, it does seem to be the case. But why?

    It seems this is a trick employed by hardware appliances (e.g. Citrix NetScaler) to ‘remove’ a header without affecting the checksum value.

  • Oops! I committed to the wrong branch

    It is common when working with git to use lots of branches. Occasionally you might accidentally commit to the wrong branch but thankfully git makes it easy to put these commits in the right place.

    It’s worth noting that the fixes described here are only for when you haven’t pushed anything to a remote branch; otherwise you would be rewriting history that someone else might have already pulled. You don’t want to do that.

    There are two scenarios we will consider.

    Move recent commits into new <branch> instead of <master>

    This is when you made a couple of commits to master but now realise they should have been split into a separate branch.

    This is easy to fix: first make a copy of the current state of your master branch, then roll it back to the previous commit. For example, if the commit hash before your changes was a6b4c974:

    git branch <branch>
    git reset --hard a6b4c974
    git checkout <branch>

    Accidentally committed on <master> instead of <branch>

    This is when you should have committed to an existing branch but accidentally committed to master; similar to the first scenario, but it requires some different voodoo to fix.

    Again, we make a copy of the current state in a tmp branch and reset master. Then we use the three-argument form of git rebase --onto to replay the commits that are now in <tmp> onto <branch>, starting from the point where <tmp> diverged from master. Finally, we merge these changes into <branch>, where they should have been committed originally, and delete the temporary branch.

    git branch <tmp>
    git reset --hard a6b4c974
    git rebase --onto <branch> <master> <tmp>
    git checkout <branch>
    git merge <tmp>
    git branch -d <tmp>


  • Time zone conversion in Google Sheets

    Google Sheets does not have a built-in way of converting time zone data, but with the power of Moment.js and Google’s script editor we can add time zone functionality to any sheet.

    First, we need to add the Moment.js code as a library that can be shared between different documents. This is the JavaScript library that adds date and time zone manipulation support. A good way to do this is to create it from Google Drive so it can be easily edited and shared with all your sheets.

    In Google Drive, go to New > More > Connect more apps and choose Google Apps Script. Now, create a new Google Apps Script document. This will open the Google script editor. Call the project ‘Moment’ and create two files in the project called moment.js and moment-timezone.js using the moment and moment-timezone libraries. Make sure you choose one of the moment-timezone files with time zone data; you should end up with a project containing those two files.

    To easily use this in multiple sheets we can publish it as a library. Go to File > Manage versions and save a new version. We are nearly finished here, but before we move on, go to File > Project properties and make a note of the project key; you will need this to refer to your library in your sheets.

    In our test we are going to use our new library to convert times in a local time zone to UTC.

    Create a new Google sheet and enter a date in A1 and a time zone name in B1.

    Go to Tools > Script editor and create a project called Functions. In the script editor, go to Resources > Libraries and using the project key you made a note of before add your Moment library and select version 1. I prefer to use a shorter identifier like ‘m’. Click save and your library is now accessible to your sheet’s script. We can create a function to convert to UTC like this:

    function toUtc(dateTime, timeZone) {
      var from = m.moment.tz(dateTime, timeZone);
      return from.tz("Etc/UTC").format('YYYY-MM-DD HH:mm:ss');
    }

    Save your project and you can now use this function in your sheets like this:

    =toUtc(TEXT(A1, "YYYY-MM-DD HH:mm:ss"), B1)
  • Loading test data with ScalaTest + Play

    The ScalaTest + Play library provides a couple of useful traits for when your ScalaTest Play Framework functional tests need a running application for context. The OneAppPerSuite trait will share the same Application instance across all tests in a class whereas the OneAppPerTest trait gives each test its own Application instance.

    These traits will ensure you have a running application for your tests but if you want to test code that operates on a database it can be helpful to load some known test data before each test and clean it up afterwards. For this, we can mix in a ScalaTest before-and-after trait.

    The BeforeAndAfter trait lets you define a piece of code to run before each test with before and/or after each test with after. There is also a BeforeAndAfterAll trait which invokes methods before and after executing the suite but resetting the database for each test makes for better test isolation.

    Here is a base test class that sets up a running application and uses the Evolutions companion object to load and clean up the database. Note this class uses Guice dependency injection to retrieve the database object but you can also easily connect to a database using the Database companion object.

    We also override the db.default.url config value to point to a test database.

    import org.scalatest.BeforeAndAfter
    import org.scalatestplus.play.{OneAppPerSuite, PlaySpec}
    import play.api.Application
    import play.api.db.Database
    import play.api.db.evolutions.{Evolution, Evolutions, SimpleEvolutionsReader}
    import play.api.inject.guice.GuiceApplicationBuilder
    
    abstract class IntegrationSpec extends PlaySpec with OneAppPerSuite with BeforeAndAfter {
    
      implicit override lazy val app: Application = new GuiceApplicationBuilder()
        .configure("db.default.url" -> sys.env.getOrElse("DB_TEST_URL", "jdbc:mysql://localhost:3306/my_test_db?useSSL=false"))
        .build
    
      before {
        val db = app.injector.instanceOf[Database]
    
        // Load the database schema
        Evolutions.applyEvolutions(db)
    
        // Insert test data
        Evolutions.applyEvolutions(db, SimpleEvolutionsReader.forDefault(
          Evolution(
            999,
            "insert into test (name, amount) values ('test', 0.0);",
            "delete from test;"
          )
        ))
      }
    
      after {
        val db = app.injector.instanceOf[Database]
        Evolutions.cleanupEvolutions(db)
      }
    }

    You can also load evolutions from the file system if you don’t want to define them in the code.
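    For example, Play’s ClassLoaderEvolutionsReader can read evolutions from an alternative class-path location. A minimal sketch, assuming your evolution files (1.sql, 2.sql, ...) live under a testdatabase/ resource directory:

```scala
import play.api.db.evolutions.{ClassLoaderEvolutionsReader, Evolutions}

// Apply evolutions read from the testdatabase/ prefix on the class path
Evolutions.applyEvolutions(db, ClassLoaderEvolutionsReader.forPrefix("testdatabase/"))
```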