So, here is a recap of a few things I have released since, and how they are leading to substantial growth in my Rust knowledge.
In an attempt to make adoption easier, I set up ksunami-docker so that running ksunami can be even easier; in Docker, Kubernetes or wherever you need. For example:
apiVersion: v1
kind: Pod
metadata:
  name: ksunami
  labels:
    ecosystem: kafka
    purpose: workload-generation
spec:
  containers:
    - name: ksunami-container
      image: kafkesc/ksunami:latest
      args: [
        "--brokers", "B1:9091,B2:9091,...",
        "--topic", "MY_ONCE_A_DAY_SPIKE_TOPIC",
        "--min-sec", "86310",
        "--min", "10",
        "--up-sec", "10",
        "--up", "spike-in",
        "--max-sec", "60",
        "--max", "10000",
        "--down-sec", "20",
        "--down", "spike-out",
      ]
The Docker image is designed so that all the arguments passed to it are passed directly to the internal ksunami binary: the exact same usage instructions apply.
As you would expect, for each release of ksunami there will be a corresponding release of ksunami-docker with a matching Docker image tag. At the time of writing, kafkesc/ksunami:latest is kafkesc/ksunami:v0.1.7.
The __consumer_offsets Kafka Topic

Unless you decide to track your Consumer Offsets outside of Kafka, you are likely using the default mechanism to commit your offsets back to Kafka itself. Kafka uses a special internal topic to store that information: __consumer_offsets. The documentation about Kafka’s internal Consumer Offset Tracking is a bit wanting, but there are plenty of articles about this topic.
For Kafkesc, where I’m writing everything in Rust in the spirit of learning by doing, I needed a parser for the records in this topic. The record keys and values are an entirely bespoke binary format, designed for this very narrow and specific need of tracking Consumer Offsets: nothing generic would do.
I couldn’t find anything in Rust that was able to parse every record and every field in it, so I built one: kafkesc/konsumer_offsets.
It has the following features:

- it parses every type of __consumer_offsets message out there
- it parses GroupMetadata messages: beyond what even Kafka’s own parser can do

If you need to consume the __consumer_offsets Kafka Topic, and are looking for a solid Rust parser, konsumer_offsets v0.1.1 is on crates.io. Give it a whirl.
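To give a feel for what that bespoke binary format looks like, here is an illustrative sketch of decoding an offset-commit record key. The layout (an i16 schema version, length-prefixed group and topic strings, an i32 partition) mirrors Kafka’s internal schema, but this simplification is mine, not the konsumer_offsets implementation:

```rust
use std::convert::TryInto;

// Illustrative sketch: decoding an offset-commit record key from
// __consumer_offsets. The field layout mirrors Kafka's internal schema,
// but this is a simplification, not konsumer_offsets' actual code.
#[derive(Debug, PartialEq)]
struct OffsetCommitKey {
    group: String,
    topic: String,
    partition: i32,
}

fn parse_offset_commit_key(b: &[u8]) -> Option<OffsetCommitKey> {
    let mut at = 0usize;

    // i16 schema version (all Kafka wire integers are big-endian)
    let _version = i16::from_be_bytes(b.get(at..at + 2)?.try_into().ok()?);
    at += 2;

    // Kafka strings are i16-length-prefixed UTF-8
    let read_string = |at: &mut usize| -> Option<String> {
        let len = i16::from_be_bytes(b.get(*at..*at + 2)?.try_into().ok()?) as usize;
        *at += 2;
        let s = std::str::from_utf8(b.get(*at..*at + len)?).ok()?.to_string();
        *at += len;
        Some(s)
    };

    let group = read_string(&mut at)?;
    let topic = read_string(&mut at)?;
    let partition = i32::from_be_bytes(b.get(at..at + 4)?.try_into().ok()?);

    Some(OffsetCommitKey { group, topic, partition })
}
```

Every field is at a fixed position relative to the previous one, which is exactly why nothing generic would do: you need a parser that knows this one schema.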
When I was reverse engineering Kafka’s internal logic to implement kafkesc/konsumer_offsets, I was looking for the Rust equivalent of Java’s ByteBuffer: a simple wrapper to place around the raw bytes, exposing a plain-English interface. Initially I didn’t find a crate that would do it for me, so I started writing basic parse_<type>() functions inside the konsumer_offsets project I was working on.
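To illustrate the idea, here is a minimal sketch of such a wrapper over the standard library: a cursor over raw bytes with plain-English parse_<type>() methods. This is NOT the actual bytes_parser (or konsumer_offsets) API, just the shape of the generalization I’m describing:

```rust
use std::convert::TryInto;

// Hypothetical sketch of a ByteBuffer-style wrapper: a cursor over raw
// bytes with plain-English parse_<type>() methods. Not the real
// bytes_parser API.
struct BytesParser<'a> {
    bytes: &'a [u8],
    cursor: usize,
}

impl<'a> BytesParser<'a> {
    fn new(bytes: &'a [u8]) -> Self {
        BytesParser { bytes, cursor: 0 }
    }

    /// Parse a big-endian i16 (Kafka wire integers are big-endian).
    fn parse_i16(&mut self) -> Option<i16> {
        let end = self.cursor + 2;
        let raw = self.bytes.get(self.cursor..end)?;
        self.cursor = end;
        Some(i16::from_be_bytes(raw.try_into().ok()?))
    }

    /// Parse a big-endian i32.
    fn parse_i32(&mut self) -> Option<i32> {
        let end = self.cursor + 4;
        let raw = self.bytes.get(self.cursor..end)?;
        self.cursor = end;
        Some(i32::from_be_bytes(raw.try_into().ok()?))
    }

    /// Parse an i16-length-prefixed UTF-8 string, the way Kafka encodes them.
    fn parse_str(&mut self) -> Option<&'a str> {
        let len = self.parse_i16()? as usize;
        let end = self.cursor + len;
        let raw = self.bytes.get(self.cursor..end)?;
        self.cursor = end;
        std::str::from_utf8(raw).ok()
    }
}
```

The wrapper owns the cursor, so callers just chain parse calls in schema order and never touch offsets by hand.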
After a few parsing functions, I realised that a simple yet effective generalization was possible, and that it would give me a chance to write my first Rust crate. So I did. That led to creating bytes_parser, currently at version v0.1.4 on crates.io.
I later learned that I could have used nom, but I still think it would have felt like shooting a fly with a cannon, so I’m glad I went my own way. While nom seems like an excellent and feature-rich crate, I needed a simple wrapper like the one I described above. This is also because the work required was to reverse-engineer the Kafka source code (Scala): in there, the logic is based on types very similar to Java’s ByteBuffer. I saw no reason not to put together a super-simple solution to call my own.
This quick blog post is a way to crystallize how important it is, for people like me, to take an active part in learning: reading documentation and trying to retain the information for later use just doesn’t work.
At university it was the same: the Computer Science classes that I absorbed best were the ones with big laboratory assignments, where you would put the concepts to use.
I know, maybe I’m (as we say in Napoli) discovering hot water here. But I very frequently feel, in this industry, that because we are all about efficiency and we hate repeating ourselves, we rely on writing long, information-dense documentation, and then expect people to absorb it.
It doesn’t work for me, and I bet it doesn’t work for many others either.
Specifically, here are a few things about Rust that working on Kafkesc projects led me to use and pick up (so far):

- cargo clippy
- match expressions as the best way to model (exhaustively) multi-branch logic
- Arc and RwLock to make multi-threaded async programming easy
- the Clone trait
- rustdoc (even for macro-generated functions!)

And probably a lot more that I can’t think of right now.
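As a tiny illustration of the Arc + RwLock point, here is a std-only sketch; Kafkesc pairs these types with Tokio tasks, but the ownership model is the same with plain threads:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Minimal illustration of the Arc + RwLock pattern: shared state, many
// possible readers, exclusive writers, across threads. (With Tokio
// tasks instead of std threads, the ownership model is identical.)
fn shared_counter_demo() -> u64 {
    let counter = Arc::new(RwLock::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter); // cheap: just bumps the refcount
            thread::spawn(move || {
                for _ in 0..1000 {
                    *counter.write().unwrap() += 1; // exclusive write lock
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    let total = *counter.read().unwrap(); // shared read lock
    total
}
```

Cloning the Arc is what lets every thread own a handle to the same lock without fighting the borrow checker.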
A lot of what I have listed above was already known to me. But trying to apply business ideas to Rust idioms, mapping what was in my head to what Rust wants you to write, is leading to a deeper understanding of this amazing programming language.
And my admiration and respect for it, and for the people behind it, have grown too.
The beauty of not having a deadline means that I can stop every now and then and hone something until it looks great, before I move on. Where in a business setting one should never let perfect get in the way of good, with Kafkesc I totally can.
And if a community spawns out of it at some point, that would be a cherry on top. For now, I’m just happy to share my Kafka-centric solutions, written in Rust.
People that have worked with me, likely know or can figure out where I’m heading with this organization: for now, I’ll keep the overarching plan to myself.
]]>In the process, I realised that I needed a way to spin up a Kafka cluster for development, and a producer of Kafka records able to behave in accordance with specific scenarios.
I’m also still learning Rust, so this was the perfect excuse: a fresh project, a language I want to become proficient in, and time to learn by doing.
From GitHub:
Ksunami is a command-line tool to produce a constant, configurable, cyclical stream of (dummy) records against a Kafka Cluster Topic.
If you are experimenting with scalability and latency against Kafka, and are looking for ways to reproduce a continuous stream of records following a specific traffic pattern that repeats periodically, Ksunami is the tool for you.
Ksunami offers a way to set up the production of records, expressing the scenario as a sequence of “phases” that repeat indefinitely. Records content is configurable but random: the purpose of the tool is to help with performance and scalability testing of your infrastructure.
Ksunami (crates.io) is a command-line tool, useful for reproducing typical production scenarios that are hard to create artificially without a lot of preparation.
# Example: Low rec/sec, but spike of 1000x once-a-day, lasting 60 seconds
$ ksunami \
--topic \
... \
--min-sec 86310 \ # i.e. 24h - 90s
--min 10 \ # most of the day, this topics sees 10 rec/sec
\
--up-sec 10 \ # transitions from min to max within 10 sec
--up spike-in \ # sudden jump: 10k rec/sec
\
--max-sec 60 \ # a spike of just 60s
--max 10000 \ # producing at 10k rec/sec
\
--down-sec 20 \ # transitions from max to min within 20 sec
--down spike-out \ # sudden drop: back to just 10 rec/sec
...
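The four phases the flags above configure can be sketched as a simple repeating cycle. This is a hypothetical illustration, not Ksunami’s actual internals; it also happens to show why an exhaustive match fits this kind of logic well:

```rust
// Hypothetical sketch (not Ksunami's actual internals) of the repeating
// cycle configured by the flags above: min -> up -> max -> down.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Phase {
    Min,  // stable low throughput, for --min-sec seconds
    Up,   // transition towards the peak, for --up-sec seconds
    Max,  // stable peak throughput, for --max-sec seconds
    Down, // transition back down, for --down-sec seconds
}

impl Phase {
    // An exhaustive `match` makes it impossible to forget a phase.
    fn next(self) -> Phase {
        match self {
            Phase::Min => Phase::Up,
            Phase::Up => Phase::Max,
            Phase::Max => Phase::Down,
            Phase::Down => Phase::Min, // ...and the cycle repeats indefinitely
        }
    }
}
```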
Ksunami comes from spending a few years managing Kafka infrastructure, and having to address problems that occur in very specific, very high-throughput situations. Situations that… wake you up at night.
In the repo I provide some ideas on how Ksunami can be used. And even at this early stage, it already offers many options to tailor it to very specific needs.
# Records in a wavy-pattern over the 24h cycle
$ ksunami \
--topic \
... \
--min-sec 21600 \ # first quarter of the day
--min 1000 \ # 1k rec/sec
\
--up-sec 21600 \ # second quarter of the day
--up ease-in-out \ # stable rise
\
--max-sec 21600 \ # third quarter of the day
--max 3000 \ # 3k rec/sec
\
--down-sec 21600 \ # fourth quarter of the day
--down ease-in-out \ # stable decline
...
Ksunami is built around the concept of 4 phases and transitions: in the README.md I provide an extended explanation of its core concepts. And if you want to know how Cubic Bézier curves fit into producing records to Kafka, give it a read.
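For the curious, here is roughly how a one-dimensional cubic Bézier curve can shape a transition’s records/sec. This is an illustrative sketch under my own assumptions, not Ksunami’s actual code:

```rust
// Hypothetical sketch of how a 1D cubic Bézier can shape the rec/sec
// transition between two phases; an approximation, not Ksunami's code.

/// Evaluate a cubic Bézier with control values p0..p3 at t in [0, 1].
fn cubic_bezier(p0: f64, p1: f64, p2: f64, p3: f64, t: f64) -> f64 {
    let u = 1.0 - t;
    u * u * u * p0 + 3.0 * u * u * t * p1 + 3.0 * u * t * t * p2 + t * t * t * p3
}

/// Records/sec during an "up" transition from `min` to `max` rec/sec,
/// `elapsed` seconds into a transition lasting `duration` seconds.
fn up_transition_rate(min: f64, max: f64, elapsed: f64, duration: f64) -> f64 {
    let t = (elapsed / duration).clamp(0.0, 1.0);
    // Control values pulled towards the endpoints give an ease-in-out feel.
    cubic_bezier(min, min, max, max, t)
}
```

At the start of the transition the rate is exactly min, at the end exactly max; different control values would produce the spike-in / spike-out profiles instead.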
At the time of writing, Ksunami is at version v0.1.x, and I have delineated a bit of a roadmap in the issue tracker.
Even if we are just at 0.1.x, Ksunami already sports a multi-threaded async core based on the superb Tokio, and a rich command-line interface thanks to the awesome clap.
Earlier I was hinting at the Kafka Development Cluster:
A batteries-included BUT development-only pairing of Makefile & docker-compose.yml, designed to quickly spin up a Kafka cluster for development purposes.
kafka-dev-cluster has all someone (I?) needs to get a local dev cluster going, and it comes with some super-simple Makefile commands, so I can get bytes in and out of Kafka fast.
There is no installation: just clone the repo, and start the cluster.
$ git clone https://github.com/kafkesc/kafka-dev-cluster.git
$ cd kafka-dev-cluster
$ make start
I have half a mind to add a Prometheus instance to it, so that it can spin up a dashboard on a different port. It could help with local monitoring. Ah, and maybe a Schema Registry?
I created a dedicated GitHub Organization to host what I’m doing. Yes! It was supposed to be called Kafkesque, but someone decided to take that username and use it to make 2 commits in 5 years ;-(. So I went for a funny alternative: Kafkesc.
In this organization I’m going to host Kafka-related repositories and projects, as the needs arise.
Ksunami is a Records Producer: I can tell it to produce more or less, at specific intervals, with specific spikes and valleys, tailoring its behaviour to the scenarios I want.
I now need a Records Consumer! Something that can be made to misbehave? That every 10 minutes takes a nap and stops consuming? That turns on once a day, consumes a whole topic of records, and then sleeps for 23h, letting records (and lag) accumulate?
Sorry, it will be a while before I can release a consumer lag calculator. But last year I did write a Rust parser for __consumer_offsets: maybe I should take a second and release that as a Rust Crate?
A new Terraform provider is available, designed to interact with ZooKeeper ZNodes:
TFZK.
The latest stable version is v1.0.3, and you should give it a go.
Ah! And here is the doc.
Earlier this year I decided to scratch a long-standing itch: build a Terraform Provider for Apache ZooKeeper. While there was already one, it came with limitations that created issues in production environments: for example, no support for PERSISTENT_SEQUENTIAL ZNodes.
Enter Terraform Provider ZooKeeper (TFZK) (GitHub repo).
From here:
ZooKeeper is a high-performance coordination service for distributed applications. It exposes common services - such as naming, configuration management, synchronization, and group services - in a simple interface so you don’t have to write them from scratch. You can use it off-the-shelf to implement consensus, group management, leader election, and presence protocols.
Good question.
Terraform is a de-facto industry standard for managing small, medium and even large infrastructure in a declarative manner. In a cloud world, Terraform is a tool to bring order to chaos.
Build a large enough infrastructure, and you will need to come up with ways to dynamically distribute configuration to running systems. Say, you need a way to inform your sharded cloud infrastructure of things like its current topology.
You could, in theory, provide your topology to every service, via startup-time configuration. But that comes with a non-zero downtime: you need to modify and distribute the configuration, and then restart your services to read it. Not ideal for your uptime.
That’s where a service like ZooKeeper shines: it’s not designed for high throughput or for storing lots of data. But to be a reliable central place to send services to, so they can coordinate and react to larger infrastructure changes.
So, to answer your question, what does ZooKeeper have to do with Terraform? ZooKeeper can be a perfect place to store a live picture of your infrastructure reality and let services rely on it to listen for changes.
ZooKeeper can do a lot of things, but all are built around the core concept of ZNode:
Every node in a ZooKeeper tree is referred to as a znode. Znodes maintain a stat structure that includes version numbers for data changes, acl changes. The stat structure also has timestamps. The version number, together with the timestamp, allows ZooKeeper to validate the cache and coordinate updates. Each time a znode’s data changes, the version number increases. For instance, whenever a client retrieves data, it also receives the version of the data.
TFZK offers ZNode CRUD via:

- the zookeeper_znode resource
- the zookeeper_sequential_znode resource
- the zookeeper_znode data source
Given that a Terraform Provider only “runs” during your plans and applies, its focus is on Persistent ZNodes. For this reason, it can’t support Ephemeral ZNodes, Watchers and other “live” features that are built for services holding a persistent connection: those are targeted at runtime services and applications (i.e. your code).
At the time I’m writing this, v1.0.3 is out. Start by adding it to your Terraform configuration:
terraform {
  required_providers {
    zookeeper = {
      source  = "tfzk/zookeeper"
      version = "1.0.3"
    }
  }
}

provider "zookeeper" {
  # Configuration options
}
And then head to the official documentation.
Great, why don’t you help me? I made it easy to set up a local dev environment, where you can spin up an ensemble to test against.
There are 2 features in particular I’d like to see implemented.
And if you are new to Terraform Provider development, take a look at this amazing tutorial. That’s how I got started.
]]>I’m not sure I fully agree with the idea that poor aesthetics (actually, ugliness) play a dominating role in the pricing bubble. I still believe the biggest issue is with political choices, most probably driven by lobbyist interests.
Nevertheless, this video is an interesting and watch-worthy perspective.
After watching this video I subscribed to the “The School of Life” channel. And frankly, you should too.
]]>We all know that the UK is very much London-centred, and this is reflected in the graph above. The government doesn’t shy away from policies that are openly “pro-London”, and while there are complaints, as an immigrant here since 2007 I can see not much more than some “grumpiness” about it. It’s like Britons are “sort of ok with it”.
To address the housing needs, the Tory government has come up with Help to Buy: you put down a 5% deposit (say £10k), the government puts in 20% (say £40k), and you can now afford a home of up to £200k - you can easily find a bank that will give you a Mortgage for the remaining 75% LTV. It’s like asking them for a glass of water: of course they will have you as a customer.
This sounds great, but it’s actually a recipe for disaster.
HTB was launched in 2013 in 2 phases: HTB1 (Equity Loan) and HTB2 (Mortgage Guarantee). At first they only applied to newly built homes; now HTB can also be used to purchase pre-built homes, but ONLY under the HTB2 formula: instead of giving you 20% of the money, HTB2 just makes the government the guarantor of your Mortgage.
All of this, capped at £600k.
Sounds good, right? Not a bit. But first, let’s highlight the benefits.
Because we are stupid. Humans rarely think long term. All of a sudden, Bob (a fictional character) can afford a much bigger house, and he only has to provide a 5% deposit. Suddenly, life is amazing.
HTB works as a loan: once you buy, you repay the government in monthly installments (to be defined with an HMRC official). The first 5 years are interest-free (WOW!) and then it’s about 1.5% plus some increase over time. It’s a great loan compared to any Mortgage.
If prices of houses were fair, this would be a great thing that would really allow people to buy a home.
In short, people are buying beyond their means. Yes, you have taken 20% of the money as this great cheap loan, so in theory you are saving loads of money. But still, you now have 2 loans: a Mortgage and the HTB loan. Your credit score is affected. You are less “trustworthy” in case you need an emergency loan.
And if that 5% deposit was already hard to put together, you are most certainly now in a situation where ALL your income is taken by the repayments. You can make it, but just one glitch in your monthly INs/OUTs and you are in trouble.
I have done my calculations, so it’s possible to take the HTB and still be comfortably within your means. But as a Software Engineer, my income is way above the national average: I ought to be in this position.
If house prices were a fixed value, this whole HTB thing would be great and Britain would have no housing problems.
But the laws of the market are well known: flood the market with money, and prices will soar. And that’s what’s happening. House prices are going up drastically because, all of a sudden, more people can afford them.
That would be “OK” if there wasn’t a party exploiting this.
What we forget to consider is that the Government’s money is OUR money. It’s what we pay in taxes every month. It’s the 20/30/40/50% of our monthly sweat that goes “puff” so that the “Government Machine can keep cranking”. Mind you, I’m not complaining about this: this is what a government does - it takes money from X to make Y happen. It’s sort of an investment logic at play, on a country scale.
The issue I have is that HTB is being exploited BIG TIME by big house builders (i.e. not your average tiny builder - we are talking about publicly floated companies like Bewley Homes, Barratt / David Wilson, Bloor Homes, Crest Nicholson…) to squeeze as much money as possible out of buyers’ and the government’s pockets. Which, at the end of the day, are the same pocket!
And how do they do that? How do they maximise their profit? By withholding properties!
Try going to any site where they have set up a show room and are starting to sell the first plots. They will tell you that “only this many are available right now”. You will ask, “what’s going to be available in X months?”. They will show you what, but prices will not be available because, in X months, the very same house, built 50 metres down the same road, might be able to go on the market for £10/20k extra!
New homes increase the market demand for homes - even pre-owned ones. This is basic economics that any university student has studied at some point. It’s called “Fabricated Demand”. You go down the street and see signage for “New Luxury Homes” in your neighbourhood. Your mind now wants to buy, but the new homes are too costly. So you buy something else.
The houses on the market reduce in number, so whoever is left can ask for more money. The builder knows that the customer can use HTB to “spend that little extra for their dream home”. So they are ready to release houses at the right time, at a price that maximises profit.
To prove that, check the prices of homes above the HTB £600k threshold: because HTB doesn’t apply to them, their prices can’t move as much. They run the risk of going “out of market”.
What’s different is that HTB has injected money that has altered reality. More money means pushing prices beyond the “too expensive” point - the point houses were supposed to reach before demand ceased and sellers were forced to rethink their pricing.
Ah, and consider also that the UK has done massive Quantitative Easing (QE): pretty much printing money. This is where the Government gets the extra edge to support HTB.
But QE is a dangerous thing to overuse: ultimately, that extra money in the market will be paid for by the taxpayers, through massively inflated prices (i.e. inflation, baby!).
Who cares! The builder is happy! The ones in trouble are the customer (who can’t repay the mortgage and loses his/her home) and the Government (which has lost taxpayers’ money).
In this game, the loser is always you. Unless you are a builder: in that case your third yacht is coming next week.
I’d call this ++a new housing bubble++. But in a country where the media are very much influenced by the government, mentioning a bubble would probably get journalists fired…
]]>To cool off, he decides to go up there. On that peak no one will follow him. They were all pretty much exhausted: no sane mind would follow him on a hike at this stage. Good, that would do!
Also, up there no stupid idiot would yell at him about how stupid it was to leave the stupid Egyptians. Yes, slavery was bad, and building palaces, pyramids and the sort was getting boring, but they got something to eat there. A shelter. A means to survive.
When he is about to reach the top of the mountain, from afar, he sees something “sparkly”. A big metal box, looking like it was made of gold, but too far away to make out its details. So, he decides to take a look.
Unbelievable! It was a… phone booth. No keypad on it: just one button. And engraved on the button: “Mr God”.
He picks up the phone, pushes the button and waits…
Meanwhile, in Heaven.
(Phone ringing)
God> Mmm…
(Phone still ringing)
God> Oh crap! Another idiot calling from Sinai. I don’t have time for this shit today!
Secretary> God, do you want me to take that?
God> Yes, please. Tell him to fuck off.
Secretary> OK!
Secretary (whispering)> Such a tool…
(Secretary answers the phone)
Secretary> Heeeelllo!
Moses> Hi. Is this God?
Secretary> Actually, I’m his secretary. Farrokh.
Moses> Farrokh?
Secretary> Yes, Farrokh. Farrokh Bulsara.
Moses> You are a man! I would have assumed God’s secretary to be a hot girl.
Secretary> You haven’t seen me, my dear!
(Awkward silence)
Secretary> So, how can I help?
Moses> May I speak with Mr God, please.
Secretary> God is quite busy at the moment. He is working on his latest play: “Landing in America”.
Moses> Mmm… OK. Whatever. Can I speak with him?
Secretary> He is actually very busy. He hates being interrupted when he is testing new religious scenarios.
Moses> It will only take a minute!
Secretary> Mmm… wait please.
(Secretary puts his hand on the phone and starts whispering to God)
Secretary> He insists. He sounds like quite a rude guy. Do you want me to put him through?
God> FUCK!!! I’m never gonna finish this shit if I get interrupted all the time!
God> OK. Let the fucker through. But fucking cut the line if I’m not done in 2 minutes. Those South Americans can’t be kept waiting. They already started building all sort of crazy shit to substitute me and I got to crash that pretty soon…
(God coughs and deepens his voice)
God> Hello child!
Moses> Hello God. How have you been?
God> Moses! It’s been a while. Did you like the water trick?
Moses> Impressive. For a second I thought you weren’t paying attention and I’d have looked like a tool, standing there with my arms in the air, and nothing actually happening.
God> Glad to be helpful.
(God picking his nose while talking)
God> OK. It was nice to speak with you. I shall…
Moses> Wait. I called to ask for your help. Those people down there, they are driving me crazy!
Moses> One of them just started melting gold and he wants to make a cow! Or a Lamb, I don’t know.
God> Yes, I have seen it - I won’t worry too much. You will probably be all dead pretty soon…
Moses> But they want to worship it!!!
God> Stupid dick-heads! That’s why I’m giving up on y’all!
Moses> “Y’all”!?!?!
God> Yeah. I started to pick up some of that American slang you know…
Moses> No, please. I need your help! Don’t you have any advice or a message I could bring to them?
God> Shit. I’m tired. It has been a long day. Can’t you call again tomorrow?
Moses> Come on! You are usually so patronising - and for once I’m actually asking for it!
God> Mmmm… OK. Have you got a pen?
Moses> No. Left it in the other trousers. I got a couple of stone slabs though…
God> That would do!
Moses> OK. Go ahead!
God> First Commandment: NEVER CALL GOD IN VAIN! He is a cranky old man and he can’t really hear you anyway. He is busy having fun torturing minions all the time and has no interest in helping them. So, leave him alone. Unless you are a hot gay version of David Jude Heyworth Law.
Moses> Oh. OK. That’s it?
God> Of course not, fuck face. Here is number Two.
(Click. Phone disconnects.)
Moses> God? GOD? Holy shit, this guy! He is impossible.
(Moses thinks a bit and then says…)
Moses> Well, once again, I suppose I’ll have to just make stuff up.
(Meanwhile)
God> Moses? MOSES? Whatever…
Secretary> I cut the line. As you asked.
God> Oh! It was YOU then. Well done. You are my saviour, as usual.
Secretary> My pleasure Brian. Please go back to your plays now: those plagues are not going to make themselves.
]]>1.9.6, released no more than 2 weeks ago (1 week?), was a coordination gone wrong, and I take part of the responsibility for it. Just discard that release.
So, for a few days I was helping with testing and refining the new cookiejar module (#11535) for PhantomJS, contributed by Joseph Rollinson (jtrollinson).
I’m very interested in this module because it makes it possible to instantiate multiple Cookie Jar objects, instead of having all the WebPage objects use the same jar. Such a feature would allow GhostDriver to finally support Session Isolation (#170), a long overdue feature.
I had just released GhostDriver 1.1.0, so I was pretty much all set up to do an extra release. Having that feature supported would make PhantomJS/GhostDriver play better with Selenium Grid, allowing more than 1 session to be registered against one browser process instance.
So, once the cookiejar thing was merged, I cut GhostDriver 1.1.1 “Okiku” and promptly made a PR against PhantomJS to merge that in.
Ariya was so kind as to wait for me to do that and cut a minor release of PhantomJS: in his intention, just bugfixes and the latest GhostDriver. Little did he know that the latest GhostDriver depended on the new cookiejar feature.
So, when he prepared the release branch (by git cherry-pick-ing), he left out the cookiejar module, but included GhostDriver 1.1.1.
Result? KA-BOOM!
Next time I decide to add support for a major feature (even if it’s not in my direct project but the one I’m based upon), I should increase the minor version number or even the major (depending on the case).
If I had done that, Ariya would have known that it wasn’t just a minor fix in GhostDriver, and this mess would have been avoided.
Happy Ghosting!
]]>So, GhostDriver 1.1.0, codename “Banquo”. This time the codename was picked by my wife - fitting, given how important she has become this year for Leonardo and me.
I know, I keep going off topic. Sorry…
As I’m writing, this new version hasn’t been imported into PhantomJS yet, but it will be very soon. Definitely before the next release of PhantomJS. And please, don’t ask me about that - there is the PhantomJS Google Group for this kind of question (I have just opened a discussion about this).
For a complete, up-to-date list of changes in the releases of GhostDriver, please DO take a look at the CHANGELOG. Here is a cut&paste for 1.1.0:
- the /maximize window command will set the window size to 1366x768, currently the most common resolution online (see statcounter)

As you might have noticed, I have highlighted 2 entries above…
One difficulty of working with a headless browser is that there is no direct way to look at the browser and try to understand what it’s doing. It’s pretty much a black box, and debugging your tests against it might be hard.
Logging was already implemented, but the only way to grab the output was via standard output/error redirection. Not always suitable for the client-server architecture of WebDriver.
The WebDriver WireProtocol defines a set of APIs to access LOGs of different types. The client logtype is implemented by the binding. The driver type should provide a view into the inner guts of the WebDriver implementation. The browser type essentially is the console of the browser. The server type… I’m not sure.
In GhostDriver 1.1.0 we have added support for 2 of them: browser and har.
HAR??? Yes, HAR - HTTP Archive.
While I already explained the browser log type, har will return a single-entry log, with the HAR of the current webpage since the first load (it’s cleared at every unload event).
Support for the other log types might come in the future (the driver type should be simple enough).
A great THANK YOU goes to Dmitry Balabka (torinaki) and Wouter Groeneveld (wgroeneveld) for their key contributions. Keep it coming guys!
This is a feature that has been requested SO MANY FREAKING TIMES!!!
PhantomJS has a rich set of APIs to control and tune the internal WebKit core. Demand for access to the plethora of APIs that PhantomJS offers was always high, since the beginning of the GhostDriver project.
While some see GhostDriver/PhantomJS as just a WebDriver implementation against a non-production browser, others see it as a way to easily control PhantomJS from another piece of software. The issue is that the WebDriver Protocol is tuned for a specific scenario: emulating user interaction with a browser. PhantomJS is a bit less of a browser, and more of a scriptable browser engine. And it’s very useful in scenarios not strictly related to testing and user interaction.
This API allows you to send a string of JavaScript, written for PhantomJS, to be interpreted within the context of a WebDriver Page. In other words, for the given script the this variable is initialized to be the current Page.
The format of this WebDriver Protocol Extension:
HTTP POST /session/:id/phantom/execute
{
  "script" : SCRIPT_SCRIPT,
  "args" : [ARRAY_OF_JS_BASIC_TYPES]
}
To see an example use of this API, check out this example.
This new API was entirely developed by Mark Watson (watsonmw), ex-colleague of mine at Neustar and awesome chap.
As I write this, I still have to finish up a couple of tasks to consider 1.1.0 fully released:
I could have waited for those tasks to be complete… but I felt like this was overdue and I wanted to share a status report as soon as I could.
Also, it gives me closure for this blog post.
]]>.avi file and an .srt file.
I had just started watching movies in foreign languages, and during the week I had stumbled upon Old Boy. I wasn’t really in the mood for staying up late, but something made me.
The title was in English: how was I supposed to know it was a movie from South Korea? I know, I was very naïve.
I remember being captured very quickly by the story. It was intriguing, cruel, nightmare-ish. Just the kind of stuff that tingles me. The main actor, Min-sik Choi, looked a lot like Jackie Chan, but was much better at acting.
I just finished watching it again. Now I’ve got a .mkv, and the subtitles are a track inside it, but the greatness of this movie comes right back to mind.
Luckily I had almost entirely forgotten the story, so I enjoyed it all over again. But it’s funny: 10 years later you notice new scenes, new details you had missed, new characters even. I had totally forgotten about the way he gets released, and the first guy with the dog that he meets once he is back in the world.
Same exact movie: what has changed is me. In 10 years SO MUCH has changed. 10 years ago I had not yet met my lovely wife. I knew I wanted to leave the limiting life of the south of Italy, but I didn’t know I could actually do it. I didn’t know that the next time this movie and I met, I’d be living a completely new life. With a beautiful wife and a beautiful son - my own little family, just getting started.
Every New Year’s Eve I look back at the year, make a mental list of what changed, “where am I now”. I enjoy thinking that, next year, I’ll be somewhere else again with my life. How exciting!
Maybe in 10 years, in 2023, I should find this long-lost friend of a movie again, and watch it once more. What the hell! I will definitely do that. Let’s just hope I don’t change too much and forget to do it.
Maybe in 10 years, my son will join me. Well, he will be 10, and this movie will probably be too “gruesome” for him - let’s say he will have to wait 20 years instead…
Time. You bastard. How do you go so fast?
Yes, I know that Spike Lee’s “new” Oldboy has just been released, but that’s an almost total coincidence. I’m going to eventually watch this USA edition, but I don’t expect it to make me happy - remakes are usually badly executed. Still, the actors involved are very good so I don’t expect to be disappointed either.
Many things have happened in the recent months: we got married; I helped my lovely wife with the pregnancy (well, she did all the hard work); and, on the 7th of November, we became parents of this beauty:
Now, we have to slowly learn to live our new life together: while we learn how to be parents, Leonardo will be learning how to be a baby. What now looks like a difficult, consuming and intense task (ask me how much we sleep at night!), will at some point become “daily routine”. A sweeter routine with Leo at the center of it all!
It is a full RESET to which we need to SETTLE. A RESETTLE.
A new life, with many unknowns to uncover, but also the will and need to be as united and as strong as ever - actually, even more so. Love seems to be our great strength, and Alessandra and I are quite good at teamwork.
Well, as EVERYONE warned us, sleeping with a newborn baby is like trying to play retro videogames: you ALWAYS want to, but never manage to.
Leonardo is a very unstable sleeper so far, and he seems to like to be more active during the night: maybe a good skill if he decides to join me in the craft of programming, but for now it’s just an energy drain (for us!). A drain that transforms into a sweet smile once you see his eyes looking at you. You are tired, you are exhausted but, somehow, his puffy cheeks make everything perfect.
One thing that seems to work is carrying him around, while I rock him and tell him all sorts of stuff, right off the top of my head. I think I have already talked him into becoming a software developer: he hasn’t disagreed, so I assume he is OK with it :-P .
This is close to 0 for now. Either as a couple or on our own, time to do stuff seems a very hard thing to find. But I’m sure it will get better: I’m still a night-worker myself, so once his sleeping becomes more regular, working a bit on opensource or playing videogames will eventually be possible.
Talking of having no time to do stuff: while it seemed quite impossible, I have indeed managed to complete the blog migration! It took a while to do all the different bits, and I still have stuff to work on, but I’m now serving this from GitHub Pages, using Jekyll to generate the static HTML/CSS/JS. I’m much happier: I can move to any other hosting in a matter of hours. I don’t even know why I bothered with non-static content for a blog/personal site.
If you are interested, the repository with all the source code (posts + Jekyll plugins) can be found in the “source” branch of detro/detro.github.io.
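For reference, a GitHub Pages setup like this needs little more than a `_config.yml` at the root of the Jekyll sources. What follows is a hypothetical minimal sketch - the title, URL, permalink scheme and excluded files are illustrative assumptions, not my actual configuration:

```yaml
# _config.yml - minimal Jekyll configuration for a GitHub Pages blog
# (all values below are illustrative, not the real config of this site)
title: My Blog
url: https://example.github.io
markdown: kramdown                         # Markdown engine used to render posts
permalink: /:year/:month/:day/:title/      # URL scheme for generated posts
exclude:                                   # files Jekyll should not copy to the output
  - README.md
  - Rakefile
```

With a user page, GitHub serves the content of the master branch, which is why keeping the Jekyll sources and plugins in a separate “source” branch (and pushing the generated output to master) works well here.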
Migrating the blog also means producing more content, right? Well, this is a first attempt. Not a great one, I admit - I find it quite difficult to come up with things to say when my brain is affected by sleep deprivation. But I have a TODO list on my whiteboard of things I want to talk about:
I have been quite far from Selenium, GhostDriver and PhantomJS. Partly it’s my fault (I did spend some time playing GTA V before Leonardo was here), but mostly it’s because of all the above.
While I don’t really have a fully up-to-date picture of what’s happening with PhantomJS, I know that Vitallium is working on the port to Qt 5. That is a very exciting prospect, and the project looks like it has a brilliant future ahead.
Also, the Qt Project is moving to Chromium/Blink: this means the PhantomJS architecture will have to change significantly, the core will be based on a more mainstream browser and, most importantly, I believe a full API compatibility break is a given. Why is that good? Because we could apply everything we learned from developing PhantomJS so far, and do a better job in many areas.
GhostDriver has received some love (aka Pull Requests). I have worked to integrate them into an upcoming 1.1.0, but I think there are other, more pressing improvements to make before considering a release.
And, if you really want to, you can still check out the master branch and run it inside the latest PhantomJS. So, I don’t feel too much pressure to package a full release.
That’s all for now.
P.S. I still haven’t told you what my next job is, have I? :)