If you're interested in gigahorse-github itself, the README contains the full documentation.
I also wrote an Extending Gigahorse page giving an overview of how to write a Gigahorse plugin, which is more or less the same as how one would write a Dispatch plugin. As I wrote there, the JSON data binding is auto-generated from a schema.
For me, gigahorse-github was as much a proof of concept for sbt-datatype as it was for Gigahorse. It ended up exposing minor bugs in every component along the stack, so it was a fruitful exercise.
There's a "pattern" that I've been thinking about, which arises in some situations when persisting or serializing objects.
To motivate this, consider the following case class:
scala> case class User(name: String, parents: List[User])
defined class User

scala> val alice = User("Alice", Nil)
alice: User = User(Alice,List())

scala> val bob = User("Bob", alice :: Nil)
bob: User = User(Bob,List(User(Alice,List())))

scala> val charles = User("Charles", bob :: Nil)
charles: User = User(Charles,List(User(Bob,List(User(Alice,List())))))

scala> val users = List(alice, bob, charles)
users: List[User] = List(User(Alice,List()), User(Bob,List(User(Alice,List()))),
  User(Charles,List(User(Bob,List(User(Alice,List()))))))
The important part is the parents field, which contains a list of other users.
Now let's say you want to turn the users list into JSON.
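To see why this gets awkward, here's a naive hand-rolled encoding (a sketch for illustration, not any particular library's API). Because bob and charles both reference alice, her JSON is inlined again at every reference, so shared structure gets duplicated in the output:

```scala
// Naive hand-rolled JSON encoding (a sketch, not a real library's API).
case class User(name: String, parents: List[User])

object NaiveJson {
  def toJson(u: User): String =
    s"""{"name":"${u.name}","parents":[${u.parents.map(toJson).mkString(",")}]}"""

  // count occurrences of a substring
  def count(s: String, sub: String): Int = s.sliding(sub.length).count(_ == sub)

  def main(args: Array[String]): Unit = {
    val alice   = User("Alice", Nil)
    val bob     = User("Bob", alice :: Nil)
    val charles = User("Charles", bob :: Nil)
    val users   = List(alice, bob, charles)

    val json = users.map(toJson).mkString("[", ",", "]")
    println(json)
    // "Alice" appears three times: once on her own, once inside bob,
    // and once inside charles (via bob)
    assert(count(json, "Alice") == 3)
  }
}
```

For a handful of users this is harmless, but for hundreds of megabytes of persisted data this kind of duplication adds up fast.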
This is part 3 on the topic of sjson-new. See also part 1 and part 2.
Within the sbt code base there are a few places where the persisted data is on the order of hundreds of megabytes, large enough that I suspect it becomes a performance bottleneck, especially on machines without an SSD.
Naturally, my first instinct was to start reading up on the encoding of Google Protocol Buffers to implement my own custom binary format.
microbenchmark using sbt-jmh
What I should've done first is start benchmarking. Using @ktosopl (Konrad Malawski)'s sbt-jmh, setting up a microbenchmark is easy. All you have to do is pop the plugin into your build and create a subproject that enables JmhPlugin.
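As a sketch, the setup might look like this (the version number is illustrative; check the sbt-jmh README for the current release):

```scala
// project/plugins.sbt
addSbtPlugin("pl.project13.scala" % "sbt-jmh" % "0.2.6") // version is illustrative

// build.sbt: a subproject that enables JmhPlugin
lazy val benchmark = (project in file("benchmark"))
  .enablePlugins(JmhPlugin)
```

With that in place, `benchmark/jmh:run` runs the benchmarks defined in the subproject.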
Two months ago, I wrote about sjson-new. I was working on that again over the weekend, so here's the update.
In the earlier post, I introduced the family tree of JSON libraries in the Scala ecosystem, and the notion of a backend-independent, typeclass-based JSON codec library. I concluded that we need some easy way of defining a custom codec for it to be usable.
roll your own shapeless
Between the April post and last weekend, there were flatMap(Oslo) 2016 and Scala Days New York 2016. Unfortunately I wasn't able to attend flatMap, but I was able to catch Daniel Spiewak's "Roll Your Own Shapeless" talk in New York. The full flatMap version is available on Vimeo, so I recommend you check it out.
sbt internally uses HList for caching via sbinary, and I've been thinking that something like an HList or Shapeless's LabelledGeneric would be a good intermediate datatype for representing a JSON object, so Daniel's talk gave me the final push.
In this post, I will introduce a special purpose HList called LList.
sjson-new comes with a datatype called LList, which stands for labelled heterogeneous list. The List[A] that comes with the Standard Library can only store values of one type, namely A. Unlike the standard List[A], an LList can store a value of a different type in each cell, and it can also store a label per cell. For this reason, each LList has its own type. Here's how it looks in the REPL:
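As a rough illustration of the idea, here is a minimal hand-rolled labelled heterogeneous list (a sketch with hypothetical types, not sjson-new's actual API). Each cell carries a label and a value of its own type, so the overall type grows with every cell:

```scala
// Hand-rolled sketch of a labelled heterogeneous list (not sjson-new's API).
sealed trait LList
case object LNil extends LList {
  override def toString = "LNil"
}
// each cell stores a label, a head value of its own type A, and a typed tail
final case class :*:[A, T <: LList](label: String, head: A, tail: T) extends LList {
  override def toString = s"($label, $head) :*: $tail"
}

object LListDemo {
  def main(args: Array[String]): Unit = {
    // x has its own type: :*:[String, :*:[Int, LNil.type]]
    val x = :*:("name", "A", :*:("value", 1, LNil))
    println(x) // (name, A) :*: (value, 1) :*: LNil
    assert(x.head == "A") // a String cell...
    assert(x.tail.head == 1) // ...followed by an Int cell
  }
}
```

The point is that the label and the per-cell type are both tracked statically, which is what makes such a structure a good intermediate representation for a JSON object.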
As a favorite weekend activity for Scala programmers, I wrote my own JSON library, called sjson-new.
sjson-new is a typeclass based JSON codec library, or wit for that Jawn. In other words, it aims to provide sjson-like codec facility in a backend independent way.
In terms of the codebase I based it off of spray-json, but conceptually it's close to Scala Pickling in the way it deals with data. Unlike Pickling, however, sjson-new-core is free of macros and runtime reflection beyond normal pattern matching.
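To illustrate what "typeclass based" means here, a minimal writer typeclass might look like the following (hypothetical names for illustration, not sjson-new's actual API). Codecs are resolved at compile time through implicit search, with no macros or runtime reflection involved:

```scala
// Minimal sketch of a typeclass-based JSON writer (hypothetical API).
trait JsonWriter[A] {
  def write(a: A): String
}

object JsonWriter {
  implicit val intWriter: JsonWriter[Int] = (a: Int) => a.toString
  implicit val stringWriter: JsonWriter[String] = (a: String) => "\"" + a + "\""
  // derive a writer for List[A] from a writer for A
  implicit def listWriter[A](implicit w: JsonWriter[A]): JsonWriter[List[A]] =
    (as: List[A]) => as.map(w.write).mkString("[", ",", "]")

  def toJson[A](a: A)(implicit w: JsonWriter[A]): String = w.write(a)
}

object CodecDemo {
  def main(args: Array[String]): Unit = {
    import JsonWriter._
    println(toJson(List(1, 2, 3)))    // [1,2,3]
    println(toJson(List("a", "b")))   // ["a","b"]
  }
}
```

Because the instances live in the companion object, the compiler finds them automatically; a backend-independent library can keep this typeclass layer fixed and swap the output representation underneath.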
The disk on your machine is fundamentally a stateful thing, and sbt can execute tasks in parallel only because it has full control of the effects. Any time you are running both sbt and an IDE, or running multiple instances of sbt against the same build, sbt cannot guarantee the state of the build.
With sbt 1.0 in mind, I have rebooted the sbt server effort. Instead of building something outside of sbt, I want to underengineer the whole thing. This means throwing out previously made assumptions that I think are non-essential, such as automatic discovery and automatic serialization. Instead, I want to make something small that we can comfortably merge into the sbt/sbt codebase. Lightbend holds an engineering meeting a few times a year where we all fly to one location, have discussions face to face, and run an internal "hackathon." During the February code retreat in beautiful Budapest, Johan Andrén (@apnylle), Toni Cunei, and Martin Duhem joined my proposal to work on the sbt server reboot. The goal was a button in IntelliJ IDEA that can trigger a build in sbt.
There have been some discussions around sbt 1.0 lately, so here is a write-up to discuss it. This document is intended as a mid-term mission statement: a refocus to get something out. Please post feedback on the sbt-dev mailing list.