Some 20 months ago we decided to turn salaries at Lunar Logic transparent. We documented the process: why, when and how we did that. The most interesting part of the story, though, is how it all played out. Obviously, we couldn’t have known it all up front.

The core of our approach to transparent salaries is that it’s not only about transparency but also about control. When we made our payroll transparent we also introduced a collaborative method to change our salaries, a.k.a. give ourselves raises.

Let me start by sharing a few facts that show what has happened since the change. We’ve had 43 salary discussions, but almost half of them (21) were triggered automatically. That happens when someone joins Lunar and we need to set their salary, when a probation period is about to finish, or when we offer employment to our interns.

Interestingly enough, there has been only one occasion when someone proposed a raise for themselves. All other threads were started for someone else.

Participation in salary discussions has been healthy. It is a very rare case when less than one-third of the company speaks up. Before you think that’s a hell of a lot of discussion, remember that we are a small organization. Right now, there are 25 of us. It still means that typically we can expect 8-12 people to share their views on a proposed raise.

These are dry facts, though. The most interesting thing is how our attitude and behaviors evolved through that time.

When I set out to write this article I started by reading through the original posts about the change, which I linked at the very beginning. What struck me while reading the old pieces was how weird it felt to see how much “I” there was in the story. Understandably so. After all, it was mostly my initiative and facilitation that drove the change. By now, however, it’s not “my” process anymore. It’s ours. Anything that happens with it is because of “us”, not “me”. Thus the weirdness.

This means that the salary process is simply one of the things we use naturally, and it isn’t perceived as a change that’s been imposed on us. In fact, when we were summarizing the year 2015 many of us mentioned making salaries transparent as a major achievement. Despite the initial fears of some, we’re doing great. Two years ago, I remarked that “transparent salaries, once in place, aren’t much of a problem.” It seems I nailed it.

We obviously made mistakes. After an initial reluctance to use the new tool, there was a period we call the “raise spree”. We were discussing multiple raises at the same time and getting pretty damn generous. That triggered discussions about the general financial situation of the company and about the consequences of different types of decisions. As a result, we raised our awareness and got more careful with raises.

A girl cartoon gnome struggles with the weight of a giant yellow gemstone.

We’ve had our disputes about how we speak up in salary threads. We started with the premise that we want to be respectful. That’s not enough, though. Sometimes we may be respectful, factual and even correct, yet it still doesn’t make a useful argument for a raise. The simple fact that I’m good at, say, sailing doesn’t create instant value for the organization.

Probably the most difficult lesson came when, within four months, we gave one of our developers a raise and then let him go. In both discussions, we had collective agreement on what we wanted to do. Clearly, we made a big mistake either with one or with the other. The plus side is that we learned a ton.

The process itself also evolved. What was initially designed as a process to change existing salaries was adapted to decide salaries for new hires. Then we started using it to decide whether we want to offer a job after an internship. We introduced a deadline for the end of discussions to constrain how much time there is to speak up. Some heuristics have been developed to guide us through the final decision making. My favorite one is about options. When voices are distributed across a few different salary levels we typically go with the lowest, as it provides us with the most options for the future. We can always start another salary thread for that person soon (and that has happened a couple of times), while it wouldn’t work the other way around.

The meta-outcome, which is something that we initially aimed for, is there too. People are getting more involved in running the company and understanding the big picture. They are becoming more and more autonomous in their decisions, even when significant money is involved. I think it is a fair statement that our payroll has become fairer too.

I’m also happy about the change for one selfish reason. I never learned to like, or even feel neutral about, discussing raises with people from my teams. Several hundred of these discussions definitely increased my skill at them, but my attitude didn’t really get better. And suddenly, I’m not one of the two parties in a negotiation. If I perceive myself as a party at all, I’m one of twenty-five, and not any more important than the rest. I guess for almost every manager out there it would be the same as it was for me: a huge relief. And we got better outcomes too. That’s a double win.

However, absolutely the best emergent behavior that was triggered by open salaries is how we share feedback with each other. The pattern is simple enough that it should have been obvious, yet I had no idea.

Boy and girl cartoon gnomes discuss how to divide up their salary of gemstones.

When we start a salary thread for someone and I have an opinion I will share it soon (typically a deadline for speaking up is around a week). However, to keep it respectful, before I write down my opinion in a discussion thread I will go talk to the person who is about to get a raise to share my feedback. After all, I don’t want them to be surprised, especially whenever I have some critique to offer. Suddenly, whenever we’re discussing somebody’s salary that person gets a ton of feedback.

That’s not all, though. If I have a critique to offer about something that is a few months old, I can hear in return something along the lines of “Hey, I wasn’t aware of that. Why didn’t you tell me earlier? I could have worked on that.” Now, I don’t know when we’ll be discussing a raise for that person, as anyone can start a salary thread at any time. This means that I’m actually incentivized to share feedback instantly.

That’s exactly what we’d love to achieve. And that’s exactly what we started doing to a huge extent. Even though for a long, long time we at Lunar were definitely above average when it came to sharing feedback, I wasn’t happy. I wanted to see more peer-to-peer feedback. Despite different experiments, I wasn’t happy until we changed how we manage our payroll.

This is the best part of having transparent salaries. In retrospect, I’d go for open salaries purely for that reason: much more high-quality peer-to-peer feedback.

By now, barely anyone could imagine Lunar Logic without transparent salaries, let alone change back. Even if the transition was a tad tricky, it paid off big time.

In our line of business estimating software projects is our bread and butter. Sometimes it’s the first thing our potential clients ask for. Sometimes we have already finished a product discovery workshop before we talk about it. Sometimes it is a recurring task in projects we run.

Either way, the goal is always the same. A client wants to know how much time building a project or a feature set is going to take and how costly it will be. It is something that heavily affects all planning activities, especially for new products, or even defines the feasibility of a project.

In short, I don’t want to discuss whether we need to estimate. I want to offer an argument for how we do it and why. Let me, however, start with how we don’t do estimation.

Expert Guess

The most common pattern that we see when it comes to estimation is the expert guess. We ask the people who would be doing the work how long a task will take. This pattern is used when we ask people about hours or days, but it is also at work when we use story points or T-shirt sizes.

After all, saying a task will take 8 hours is as uncertain an assessment as saying that it is a 3-story-point task or that it is S-sized. The only difference is the scale we are using.

The key word here is uncertainty. We make our expert guesses in the area of huge uncertainty. When I offer that argument in discussions with our clients typically the visceral reaction is “let’s add some details to the scope of work so we understand tasks better.”

Interestingly, making more information available to estimators doesn’t improve the quality of estimates, even if it improves the confidence of estimators. In other words, a belief that adding more details to the scope makes an estimate better is a myth. The only outcome is that we feel more certain about the estimate even if it is of equal or worse quality.

The same observation is true when the strategy is to split the scope into finer-grained tasks. It is, in a way, adding more information. After all, to scope out finer-grained tasks we need to make more assumptions. If for nothing else, we do that to define the boundaries between smaller chunks. Most likely we wouldn’t stop there but would also attempt to keep the level of detail we had in the original tasks, which means even more new assumptions.

Another point that I often hear in this context is that experience in estimating helps significantly in providing better assessments. The planning fallacy, described by Roger Buehler, shows that this assumption is not true either. It also points out that having a lot of expertise in the domain doesn’t help nearly as much as we would expect.

Daniel Kahneman, in his profound book Thinking, Fast and Slow, argues that awareness of the flaws in our thinking process doesn’t immunize us against falling into the same traps again. Even if we are aware of our cognitive biases, we are still vulnerable to them when making a decision. By the same token, simple awareness that the expert guess as an estimation technique has failed us many times before, and knowledge of why it was so, doesn’t help us improve our estimation skills.

That’s why we avoid expert guesses as a way to estimate work. We use the technique on rare occasions when we don’t have any relevant historical data to compare. Even then we tend to do it at a very coarse-grained level, e.g. asking ourselves how much we think the whole project would take, as opposed to assessing individual features.

Ultimately, if expert guess-based estimation doesn’t provide valuable information there’s no point in spending time doing it. And we are talking about activities that can take as much as a few days of work for a team each time we do it. That time might have been used to actually build something instead.

Story Points

While I think of expert guesses as a general pattern, one of its implementations, story point estimation, deserves a special comment. There are two reasons for that. One is that the technique is widespread. Another is that there seems to be a big misconception about how much value story points provide.

The initial observation behind introducing story points as an estimation scale is that people are fairly good when it comes to comparing the size of tasks even if they fail to figure out how much time each of the tasks would take exactly. Thanks to that, we could use an artificial scale to say that one thing is bigger than the other, etc. Later on, we can figure out how many points we can accomplish in a cadence (or a time box, sprint, iteration, etc., which are specific implementations of cadences).

The thing is that it is not the size of tasks but flow efficiency that is the crucial parameter defining the pace of work.

For each task that is being worked on we can distinguish between work time and wait time. Work time is when someone actively works on a task. Wait time is when a task waits for someone to pick it up. For example, a typical task would wait between coding and code review, code review and testing, and so on and so forth. However, that is not all. Even if a task is assigned to someone it doesn’t mean that it is being worked on. Think of a situation when a developer has 4 tasks assigned. Do they work on all of them at the same time? No. Most likely one task is active and the other three are waiting.

development team flow efficiency

The important part about flow efficiency is that, in the vast majority of cases, wait times heavily outweigh work time. A flow efficiency of 20% is considered normal. This means that a task waits 4 times as long as it’s being worked on. Flow efficiency as low as 5% is not considered rare. It translates to wait time being nearly 20 times as long as work time.

With low flow efficiency, doubling the size of a task contributes only a marginal change to the total time the task spends in the workflow (lead time). With 15% flow efficiency, work time is just 15% of lead time, so doubling the size of the task makes lead time only 15% longer than initially. Tripling the size of the task results in a lead time that is only 30% longer. Let me rephrase: we just tripled the size of the task and lead time grew by less than a third.
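For the curious, here is a minimal sketch in Python of the arithmetic behind those numbers (assuming, as above, that wait time stays fixed while work time grows):

```python
# Lead time = work time + wait time. With flow efficiency f, work time is
# f * lead_time and wait time is (1 - f) * lead_time.
# Assumption (same as in the text): growing a task changes work time only;
# wait time stays fixed.

def lead_time_growth(flow_efficiency, size_multiplier):
    """Relative lead time growth when a task's work time is multiplied."""
    work = flow_efficiency              # work time, as a fraction of lead time
    wait = 1.0 - flow_efficiency        # wait time, as a fraction of lead time
    return work * size_multiplier + wait - 1.0

print(f"{lead_time_growth(0.15, 2):.0%}")  # doubling the task -> 15% longer lead time
print(f"{lead_time_growth(0.15, 3):.0%}")  # tripling the task -> 30% longer lead time
```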

estimation low flow efficiency

Note that I go by the assumption that increasing the size of a task wouldn’t result in increased wait time. Rarely, if ever, would such an assumption hold true.

This observation leads to the conclusion that investing time into any sizing activity, be it story point estimation or T-shirt sizing, is not time well spent. It has, as a matter of fact, been confirmed by research run by Larry Maccherone, who gathered data from ten thousand agile teams. One of the findings Larry reported was that velocity (story points completed in a time box) is not any better a measure than throughput (the number of stories completed in a time frame).

In other words, we don’t need to worry about the size of tasks, stories or features. Knowing the total number of them is enough to understand how much work there is to be done.

The same experience is frequently reported by practitioners, and here’s one example.

If there is value in any sizing exercise, be it planning poker or anything else, it is in two cases: we either realize that a task is simply too big when compared to others, or we have no clue what a task is all about.

We see this a lot when we join our clients in more formalized approaches to sizing. If there is any signal we get from the exercise, it is when the biggest size in use gets picked (“too big”) or when a team can’t tell, even roughly, what the size would be (“no clue”). That’s, by the way, what inspired these estimation cards.

Historical Data

If we avoid expert guesses as an estimation strategy what other options do we have? There is a post on how approaches to estimation evolved in the agile world and I don’t want to repeat it here in full.

We can take a brief look, however, at the options we have. The approaches that are available basically fall into two camps. One is based on expert guess and I focused on that part in the sections above. The other one is based on historical data.

Why do we believe the latter is superior? As we have already established, we humans are not well-suited to estimating. Even when we are aware that things went wrong in the past we tend to assume optimistic scenarios for the future. We forget about all the screw-ups we fought, all the rework we did, and all the issues we encountered. We also tend to think in ideal hours, despite the fact that we don’t spend 8 hours a day at our desks. We attend meetings, have coffee breaks, play foosball matches, and chat with colleagues. Historical data remembers it all, since all these things affect lead times and throughput.

Lead time for a finished task also includes the additional day when we fought a test server malfunction, the bank holiday that happened at the time, and the unexpected integration issue we found while working on the task. We would be lucky if our memory retained even one of these facts.

By the way, I have had the opportunity to measure what we call active work time in a bunch of different teams in different organizations. We defined active work time as time actively spent on work that moves tasks on a visual board toward completion, compared to team members’ total available time. For example, we wouldn’t count general meetings as active work time, but a discussion about a feature would fall into this category. To stick with the context of this article, we wouldn’t count estimation as active work time either.

Almost universally, active work time per team landed in the range of 30%-40%. This shows how far from the ideal 8-hour workday we really are, despite our perceptions. And it’s not that these teams were mediocre. On the contrary, many of them were considered top-performing teams in their organizations.

Again, historical lead times for tasks already account for the fact that we’re not actively working 8 hours a day. The best part is that we don’t even need to know what our active work time is.

Throughput

The simplest way of exploiting historical data is looking at throughput. In a similar manner to how we track velocity, we can gather data about throughput in consecutive time boxes. Once we have a few data points we can provide a fairly confident forecast of what can happen within the next time box.

Let’s say that in 5 consecutive iterations 8, 5, 11, 6 and 14 stories were delivered respectively. On one hand, we know that we have a range of possible throughput values at least as wide as 5 to 14. However, we can also say that there’s an 83% probability that in the next sprint we will finish at least 5 stories (in this presentation you can find the full argument why). In short: a new sample is equally likely to land in any of the 6 rank positions relative to the 5 samples we already have, so it falls below the observed minimum with probability 1/6, and matches or beats it with probability 5/6, roughly 83%. We are now talking about a fairly high probability.

estimation probability - 83% chance that the next sample falls into this range

And we had only five data points. The more we have, the better our predictions get. Let’s assume that in the next two time boxes we finished 2 and 8 stories respectively. Pretty bad result, isn’t it? However, if we’re happy with a confidence level of around 80%, we would again say that in the next iteration we will most likely finish at least 5 stories (this time with 75% probability, as 5 is now the second-smallest of 7 samples). This is true despite the fact that we’ve had a couple of pretty unproductive iterations.
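Both probabilities follow from the same rank argument: a new sample lands at or above the k-th smallest of n previous samples with probability (n+1-k)/(n+1). Here is a quick simulation sketch in Python (assuming iterations behave like independent draws from one distribution) that reproduces the numbers:

```python
import random

def p_at_least_kth_smallest(n, k, trials=100_000):
    """Estimate P(a new sample >= the k-th smallest of n previous samples)."""
    hits = 0
    for _ in range(trials):
        previous = sorted(random.random() for _ in range(n))
        if random.random() >= previous[k - 1]:
            hits += 1
    return hits / trials

print(p_at_least_kth_smallest(5, 1))  # ~0.83: at least the minimum of 5 samples
print(p_at_least_kth_smallest(7, 2))  # ~0.75: at least the 2nd smallest of 7 samples
```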

new estimation probability - 75% chance that the sample falls into new range

Note that in this example I completely ignore the size of the tasks. Part of the argument why is provided above. Another part is that we can fairly safely assume that tasks of different sizes are distributed across different time boxes, so we are actually, invisibly, taking size into consideration.

The best part is that we don’t even need to know the exact impact of the size of a task on its lead time and, as a result, throughput. Yet again, it is taken care of.

Delivery Rate

Another neat way of using historical data is delivery rate, which is based on the idea of takt time. In manufacturing, takt time describes how frequently the manufacturing of an item is started (or finished). Using it, we can figure out the throughput of a production line.

In software development, the workflow is not as predictable and stable as in manufacturing. Thus, when I talk about delivery rate, I talk about average numbers. Simply put, in a stable context, i.e. a stable team setup, over a longer time frame we divide the elapsed time (number of days) by the number of delivered features. The answer is how frequently, on average, we deliver new features.

We can track different time boxes, e.g. iterations, different projects, etc., to gather more data points for analysis. Ideally, we would have a distribution of possible delivery rates in different team lineups.

Now, to assess how much time a project will take, all we need is a couple of assumptions: how many features we will eventually build and which team will work on the project. Then we can look at the distribution of delivery rates for projects built by similar teams, pick data points for the optimistic and pessimistic boundaries, and multiply them by the number of features.

Here’s a real example from Lunar Logic. For a specific team setup, we had a delivery rate between 1.1 and 1.65 days per feature. It means that a project which we think will have 40-50 features would take between 44 (1.1 x 40) and 83 (1.65 x 50) days.

Probabilistic Simulation

The last approach described above is, technically speaking, oversimplified, incorrect even, from a mathematical perspective. The reason is that we can’t simply use averages when the data doesn’t follow a normal distribution. However, in our experience the outcomes it produces, even if not mathematically correct, are of high enough quality. After all, with estimation we don’t aim to be perfect; we just want to be significantly better than what expert guesses provide.

By the same token, if we use a simplified version of the throughput-based approach and just go with average throughput to assess a project, the computation isn’t mathematically correct either. Yet it would still most likely be better than expert guesses.

We can improve both methods, and make them mathematically sound at the same time, with Monte Carlo simulation. Put simply, we randomly choose one data point from the pool of available samples and assume it will happen again in the project we are trying to assess.

Then we run thousands and thousands of such simulations and we get a distribution of possible outcomes. Let me explain it using the throughput example from before.

Historically, we had throughputs of 8, 5, 11, 6 and 14. We still have 30 stories to finish. We randomly pick one of the data samples. Let’s say it was 11. Then we do it again. We keep picking until the sum of picked throughputs reaches 30 (as this is how much work is left to be done). The next picks are 5, 5 and 14. At this point we stop: a single run of the simulation assessed that the remaining work requires almost 4 more iterations.

software project forecast burn up chart

It is easy to understand when we look at the outcome of the run on a burn-up chart. It neatly shows that this is, indeed, a simulation of what can really happen in the future.

Now we run such a simulation, say, ten thousand times. We get a distribution of results between a little more than 2 iterations (the most optimistic boundary) and 6 iterations (the most pessimistic boundary). By the way, both extremes are highly unlikely. Looking at the whole distribution, we can find an estimate for any confidence level we want.
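A minimal sketch of that simulation in Python, using the illustrative numbers from the example, might look like this:

```python
import random

throughput_samples = [8, 5, 11, 6, 14]  # stories finished in past iterations
remaining_stories = 30

def simulate_once():
    """One run: re-sample historical throughput until the backlog is done."""
    done, iterations = 0, 0
    while done < remaining_stories:
        done += random.choice(throughput_samples)
        iterations += 1
    return iterations

runs = sorted(simulate_once() for _ in range(10_000))

# Read an estimate off the distribution at any confidence level we want.
for confidence in (0.5, 0.9):
    print(f"{confidence:.0%} confidence: {runs[int(confidence * len(runs)) - 1]} iterations")
```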

software project forecast delivery range

We can adopt the same approach to improve the delivery rate technique. This time we would use a different, randomly picked historical delivery rate for each story we assess.
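A sketch of that variant follows, with a hypothetical pool of historical delivery rates (a real run would use measured rates for a similar team setup, and the 45-feature scope is just the mid-range of the earlier 40-50 estimate):

```python
import random

# Hypothetical historical delivery rates, in days per feature.
delivery_rate_samples = [1.1, 1.2, 1.35, 1.5, 1.65]

def simulate_project(features):
    """One run: a freshly sampled delivery rate for every feature."""
    return sum(random.choice(delivery_rate_samples) for _ in range(features))

# 45 features assumed (illustrative, mid-range of the 40-50 scope estimate).
durations = sorted(simulate_project(45) for _ in range(10_000))
print(f"90% confidence: done within about {durations[8999]:.0f} days")
```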

Oh, and I know that “Monte Carlo method” sounds scary, but the whole computation can be done in an Excel sheet with super-basic technical skills. There’s no black magic here whatsoever.

Statistical Forecasting

Since we have already reached the point where we know how to employ Monte Carlo simulation, we can improve the technique further. Instead of using oversimplified measures, such as throughput or delivery rate, we can run a more comprehensive simulation. This time we are going to need lead times (how much time elapsed from when we started a task until we finished it) and Work in Progress (how many ongoing tasks we had during a day) for each day.

The simulation is somewhat more complex this time as we look at two dimensions: how many tasks are worked on each day and how many days each of those tasks takes to complete. The mechanism, though, is exactly the same. We randomly choose values out of historical data samples and run the simulation thousands and thousands of times.

In the end, we land on a distribution of possible futures, and for any confidence level we want we can read off a date by which the work should be completed.
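As a rough illustration of the mechanism only (this is my own simplification for the sketch, not the exact algorithm from Troy Magennis’s work mentioned below), one way to combine the two dimensions is to treat a sampled WIP as a number of parallel “lanes” and pull tasks with sampled lead times through them:

```python
import heapq
import random

# Hypothetical historical samples; a real run would use measured data.
lead_time_samples = [2, 3, 3, 4, 5, 6, 8, 13]  # days from task start to finish
wip_samples = [3, 4, 4, 5, 6]                  # concurrent tasks observed per day

def simulate_once(remaining_tasks=30):
    """One run: sampled WIP sets the number of parallel lanes; each task
    occupies a lane for a sampled lead time."""
    lanes = [0.0] * random.choice(wip_samples)  # the day each lane frees up
    heapq.heapify(lanes)
    finish = 0.0
    for _ in range(remaining_tasks):
        start = heapq.heappop(lanes)            # next free lane takes the next task
        end = start + random.choice(lead_time_samples)
        finish = max(finish, end)
        heapq.heappush(lanes, end)
    return finish

results = sorted(simulate_once() for _ in range(10_000))
print(f"90% confidence: done within {results[8999]:.0f} days")
```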

estimating distribution

The description I’ve provided here is a super-simple version of what you can find in Troy Magennis’s original work. For that kind of simulation one may need the support of software tools.

As a matter of fact, we have an early version of a tool developed at Lunar Logic that helps us to deal with statistical forecasting. Projectr (as this is the name of the app) can be fed with anonymized historical data points and the number of features and it produces a range of forecasts for different confidence levels.

To make things as simple as possible, we only need the start and finish dates of each task we feed Projectr with. This is in perfect alignment with my argument above that the size of tasks is, in the vast majority of cases, negligible.

Anyway, anyone can try it out and we are happy to guide you through your experiments with Projectr since the quality of data you feed the app with is crucial.

Estimation at Lunar Logic

I have already provided you with plenty of options for how estimation may be approached. However, I started with the premise of sharing how we do it at Lunar Logic. There have been hints here and there in the article, but what follows is a comprehensive summary.

There are two general cases when we get asked about estimates. First, when we are in the middle of a project and need to figure out how much time another batch of work, or the remaining work, is going to take. Second, when we need to assess a completely new endeavor so that a client can get some insight into budgetary and timing constraints.

The first case is a no-brainer for us. We have relevant historical data points in the context that interests us (same project, same team, same type of tasks, etc.). We simply use statistical forecasting and feed the simulation with the data from the same project. In fact, in this scenario we also typically have pretty good insight into how firm our assumptions about the remaining scope of work are. In other words, we can fairly confidently tell how many features, stories or tasks there are to be done.

The outcome is a set of dates along with confidence levels. We would normally use a range of confidence levels from 50% (half the time we should be good) to 90% (9 times out of 10 we should be good). The dates that match the confidence levels of 50% and 90% serve as our time estimate. That’s all.

estimating range

The second case is trickier. In this case, we first need to make an assumption about the number of features that constitutes the scope of a project. Sometimes we get that specified from a client. Nevertheless, our preferred way of doing this is to go through what we call a discovery workshop. One of the outcomes of such a workshop is a list of features of a granularity that is common for our projects. This is the initial scope of work and the subject for estimation.

Once we have that, we need to make an assessment of the team setup. After all, a team of 5 developers supported by a full-time designer and a full-time tester will have a different pace than a team of 2 developers, a part-time designer and a part-time tester. Note: it doesn’t have to be the exact team setup that will end up working on the project, but ideally it is as close to that as possible.

When we have made explicit assumptions about the team setup and the number of features, we look for past projects that had a similar team setup and roughly the same granularity of features. We use the data points from these projects to feed the statistical forecasting machinery.

Note: I do mention multiple projects as we would run the simulation against different sets of data. This would yield a broader range of estimated dates. The most optimistic end would refer to 50% confidence level in the fastest project we used in the simulation. The most pessimistic end would refer to 90% confidence level in the slowest project we used in the simulation.

In this case, we still face a lot of uncertainty, as the most fragile part of the process is the set of assumptions about the eventual scope, i.e. how many features we will end up building.

software project forecasting two distributions

In both cases, we use statistical forecasting as the main method of estimation. Why would I care to describe all other approaches then? Well, we do have them in our toolbox and use them too, although not that frequently.

We sometimes use a simple assessment based on delivery rate (without the Monte Carlo simulation) as a sanity check that the outcomes of our statistical forecast aren’t off the charts. On occasion we even fall back on expert guesses, especially in projects that are experimental.

One example would be a project in a completely new technology. In this kind of situation the amount of technical research and discovery would be significant enough to make forecasting unreliable. However, even on such occasions we avoid sizing or making individual estimates for each task. We try to very roughly assess the size of the whole project.

We use a simple scale for that: can it be accomplished in hours, days, weeks, months, quarters or years? We don’t aim to answer “how many weeks” but rather figure out what order of magnitude we are talking about. After all, in a situation like that we face a huge amount of uncertainty so making a precise estimate would only mean that we are fooling ourselves.

This is it. If you went through the whole article you know exactly what you can expect from us when you ask us for an estimate. You also know why there is no simple answer to a question about estimation.

Our brains work in weird ways. Sometimes you struggle to think of anything; you sit there looking at a blank computer screen for hours, unable to make something look good. Never mind whether you are a designer or a developer, you have trouble putting the pieces together so the website behaves the way you want. And then there are the times when you just look at something (that doesn’t even have to be connected to the web!) and a great idea strikes. It can happen during the night, on a commute, at your friend’s wedding or while travelling through Asia. For me, it came when I was looking up the time on my phone at night. My phone’s wallpaper depicts the Northern Lights. It is beautiful; I’ve been using it for at least two years now. But this time, in the middle of the night, it struck me how awesome it would be if it were animated. Or better yet… to have a wallpaper like that on my computer… or maybe a website with a background like it that moves too? I wrote the idea down and fell asleep.

An Idea Revisited

At Lunar we have something called Slack Time. It’s the time between projects and you can do whatever you want. Literally! You can read a book, master a new programming language, help someone with their problem or even do nothing (but that’s a waste of time, isn’t it?). I happened to be on slack at the moment: I had just finished my tasks in one project and was waiting for another one to start. The conditions for creative tasks were perfect, because World Youth Day was on in Kraków and our office was deserted. I decided to play with the background idea and see what I could come up with. The outcome is a collection of animated gradient backgrounds for the web, all inspired by the night skies. In the next paragraphs, I’ll explain how I did it.

The Northern Lights Code

I started with a full page that consisted of nothing but a gradient background done with CSS3 linear gradients. It looked nice, but it was not what I was aiming for. I needed it to move in a very delicate, almost invisible way. You might remember my previous blog post about the FLIP technique and the performance of animations. You can’t just animate the background image and the gradient properties. It is slow, the animation is not smooth and there is jank. I tried to animate it anyway, just to see the results in Chrome’s FPS meter. The animation moved at an inconsistent 2-55 FPS. Not good enough. I needed a different approach.

It was not a long search, because you don’t have many options if you want an animation that performs well (FYI, you should only animate the opacity and transform properties). So I started playing with rotating and translating my gradient’s position to achieve a sense of delicate movement. That was the way to go! I added an animation that sways the container.

But there was one problem: the whole gradient container was moving, which was very annoying because the browser’s scrollbars would jump in and out of the page. The good thing was that it was easily solved by setting up an outer container with its overflow property set to ‘hidden’. It can be any size really; I chose to span it across the whole viewport. One thing to remember was to make the gradient container much bigger so that it wouldn’t show white space at the corners while moving. Twice as big as the outer container felt reasonable.

A gradient container restricted by a smaller container with overflow: hidden;

Take a look at the code:

Auroral background on a gif

Starry night

The effect felt really mesmerising. But it still lacked something that my iPhone wallpaper had: a tonne of small white dots, stars. Of course, I didn’t want to add 100 elements to the DOM; it would be a killer for website performance. I decided to use one small div, 1px wide and tall, and “copy” it as many times as I wanted thanks to box shadows and absolute positioning. There is nothing more helpful than a Sass function for that, just take a look:

And the effect:

Auroral CSS gradient with starry dots

The coolest thing about this is that you can choose the number of stars that suits you, and every time you compile your Sass file the stars will be placed somewhere else thanks to the random() function. :)

Summary

I hope you enjoyed the article. If you like the backgrounds, remember to give the repository a star on GitHub. I also enjoy seeing pull requests (or even issues), so please help me make the library better. You can also follow me on Twitter or Snapchat to be the first to find out about improvements to Auroral and all the new things I come up with in the future.

There are things that we get used to very quickly and then we can hardly imagine going back to a previous state. One such thing for me, in a professional context, is transparency. My default attitude for years was to aim for more transparency than I encountered when joining an organization. I didn’t put much thought into that, though.

Things changed for me when I joined Lunar Logic. On one hand, it was a nice surprise how transparent the organization had been. On the other, I kept my attitude and over time we were becoming more and more transparent.

We’ve now reached the point where there’s literally no bit of information that is not openly available to everyone at the company.

Personal preference aside, my argument for transparency is that if we want people to get involved in making reasonable decisions, they need all relevant information available at hand. Otherwise, even if they are willing to actively participate in leading the company, the decisions they make will mostly be random.

From this perspective, the need for transparency escalates really quickly. Let me give you an example. If someone is supposed to autonomously decide whether they should spend a day helping troubled colleagues on another project, they should know the constraints of both projects: the one that person is on and the one that requires support. Suddenly we are talking about the daily rates that we use to bill our clients and the expected long-run revenues of the two projects.

One argument I frequently hear against making commercial rates transparent to employees is that they will see how big the gap is between the rates and their salaries and will feel exploited by the bosses. Well, that may be true if they do not understand the big picture: overhead costs and their value for the organization, the sense of stability and safety provided by a profitable company, etc. Such a discussion, in turn, means making the financial situation of the company transparent too. We go further down the avenue of transparency.

And then, one day, you realize that a professional services organization has roughly 80% of its costs directly related to the cost of labor. In other words, it is hard to meaningfully discuss the financial situation of the company if we have an elephant in the room: non-transparent salaries.


That’s basically the path we took at Lunar Logic. I won’t say everything was easy. Unsurprisingly, the hardest bit was the change toward open salaries. By the way, there’s a longer story about how we approached this part: Part 1, Part 2 and Part 3.

There is, in fact, a meta-observation I’ve made as we’ve been moving toward the extreme transparency we have right now. Reluctance to provide transparency inside a company has two potential sources: awareness that people are treated unfairly (more common, and in the vast majority of cases true) or lack of faith that people would understand the full context of the information even if they knew it (less common and typically false).

Since salaries are a fairly sensitive topic, they serve as a good example here. Typically the biggest fear related to the idea of transparent salaries is that what people earn is, at least in some cases, unfair. Therefore, transparency would either trigger requests for raises or dissatisfaction that some people are overpaid (or, most typically, both). This is a valid point, but one that arguably should be addressed anyway.

The argument that people would not understand the context rarely holds. We trust people to reason sensibly when they solve complex technical and business problems in the context of product development. That’s what we hire them for. Then why shouldn’t they be capable of doing the same when talking about the company they’re with?

Besides, transparency enables trust. In this case, transparent decision makers help to build trust among those who are affected by these decisions. It tweaks how the superior-subordinate relationship is perceived. It wasn’t that much of an issue in our case as we have no managers whatsoever, yet in most workplaces this will be an important effect of introducing transparency.

There are two key lessons we learned from our journey. One is that transparency triggers autonomy. In fact, it is a prerequisite to introducing more autonomy. And, as we know, autonomy is one of the key factors responsible for motivation. In other words, to keep people engaged we need a healthy dose of transparency.

The other lesson is that transparency makes everything easier. Seriously. While the process of enabling autonomy may be a challenge, once you’re there literally everything is easier. No one thinks about what can be shared with whom. If anyone needs any bit of information they simply ask a relevant person and they learn everything they want to know. Decisions have much simpler explanations as the whole context can be shared. Discussions are more relevant as everyone involved has access to the same data. Finally, and most importantly, fairness becomes a crucial context of pretty much all the decisions that we make.

I can hardly picture myself in a different environment, even if I spent most of my professional life far from this model.

And that’s only one perspective on transparency. We can also look at how it affects our relationships with clients. But that’s another story.

To be honest, I hardly ever stumble upon a situation where I have trouble finding a satisfying solution to my problem on the internet. And yet it happened to me last week. I was thinking of a way to improve the design of an application that our awesome interns, Asia and Przemek, are making. The app is really simple: it’s for rating submissions from people who want to participate in a Rails Girls event. In the app you can log in, view the whole list of submissions, filter them by rated/not rated and view a single entry. It’s on the single submission screen that you can rate and click previous/next arrows to view another application. People who use the app usually go to the view with the list of unrated submissions, go to the first or the last record, rate it and navigate with the arrows to the next one.

Since I always try to find ways to improve user experience, I started thinking about what could be done to make rating many, many, many submissions in a row a pleasant rather than a daunting experience. Usually a user doesn’t even have to scroll down the page; he or she quickly scans the description of a wannabe attendee and rates them on a scale from 1 to 5. I thought it would be a nice touch to add a cute animation to the rating form, one that would make a person feel satisfied and want to click again. I started browsing the internet in search of inspiration for such an animation and couldn’t find anything really satisfying. That’s when I knew I needed to craft this cute interaction myself. And hell, why not share it with others in case they are ever in need of creating a similar experience?

Starability.css rise

I decided to prepare the code in a way that would be easy for everyone to use. I chose the simplest way: put the code on GitHub in the form of a small library with separate files for each animation. You can find it under the name Starability in our Lunar repository. The name is a combination of the two words that explain the library’s purpose best: to star and accessibility (or just ability in general, if you like that better). Why accessibility? Because what I ended up with is a cute rating widget fully accessible by keyboard. Yay! You can go to the Starability demo page to play with the animations or visit the GitHub repository to see the code. There are only a few versions of the widget for now, but I am hoping to add more soon. ;)

 

Starability fading animation

 

Technique explained

Since I wanted to make rating accessible by keyboard and didn’t want to make the interns’ little application heavy with loads of JavaScript, I decided to use the accessible star rating widget technique by Lea Verou and enhance it with my animations. To understand the technique better you can read the following code with commentary (you don’t need to understand it to use the library, though!). In short, we have a collection of radio buttons in inverted order, and we take advantage of the sibling combinators ~ and + to target elements that come after the input in the :checked state.

Knowing that, we have a fieldset that looks like this:

Rating form with no styles

 

And we basically float the radio buttons to the right, which lists them in the direction from 1 to 5, not as they appear in the markup. The only disadvantage of this technique is that when navigating the stars with the left and right arrows, they are highlighted in the reverse direction of what you’d expect. It is a bit confusing for us, but it shouldn’t be a problem for a person using a screen reader, because the rates will simply be read in descending order. Navigating with the up and down arrows works as expected.

We hide the inputs themselves and style the labels so that they appear as block elements with stars as background images. The label text colour is transparent, but the text will still be read by screen readers, so everyone can know which rank is being marked. I am using background images in the labels, not Unicode characters, as some screen readers read :before and :after pseudoelement content.

Now we are just one step from being able to highlight the labels that appear to the left of the checked input. To achieve this we just need a clever selector that takes all the labels after the input in the :checked state.

The rest of the CSS is cosmetic. Of course, there is a cherry on top: the animations. They are implemented in a very simple way: all labels have an :after pseudoelement that is hidden until one of the radio buttons is checked. Once it is, we show the pseudoelement, which triggers its animation.

Starability growing star animation

Accessibility, performance, other long words

To make rating even more accessible, I’ve added a delicate outline that shows which element currently has focus: it is useful for a person who can see but doesn’t navigate the website with a mouse or a touchpad. It is always visible in WebKit-based browsers and visible only while navigating with a keyboard in Firefox. If you don’t see a need for it in your app, you can easily disable it by deleting/commenting out 3 lines of code.

Another thing to note is that stars are highlighted on hover. To achieve this effect we change the background image position of a label. This causes website repaints whenever the hover is triggered, so if you are a performance junkie you might want to turn that off too. The Starability.css readme explains how to disable both of the mentioned behaviours easily.

Customisation? Why not!

If you are well versed in SCSS you can easily adjust the rating widget to your needs, e.g. have a 10-star system or turn off the previously mentioned outline and hover. It can be done by setting true/false values for the variables and running a gulp task to process the files. Of course, to have a 10-star system you also need to add the additional radio inputs in your HTML. It is explained in detail in the reference.

Grab & enjoy

If you like this small library feel free to use it in any way you want: it’s open source and I don’t mind you just copying and pasting the code into your app – as long as the web gets more accessible and beautiful, I will be happy! If you have any questions feel free to write a comment here, or ping me on Snapchat or Twitter.


Being a software dev is an exciting adventure and a great way of life.  

It’s not all moonlight and roses, though.

Numerous challenges await you down the road: nemeses who will summon distress and anxiety, tamper with your mood, undermine your confidence, jam your performance and turn your efforts to dust.

If you’re an emotional person like me, then you know how easy it is to succumb to them.

But fear not, my friend!

There are ways to defeat the gloom. Let me share some of the tricks I am using while fighting off my everyday enemies.

1. The Wall

This one comes from Robert Pankowiecki: How to get anything done.

There are times you’re just stuck. Be it a bug you can’t find, a problem you don’t know a solution for or a new tech you’ve never tried before. You feel intimidated and afraid. You want to get out, forget and procrastinate.

It’s fine. Don’t fight it.

Instead: accept these negative feelings and… just start.

It’s not easy, quite the opposite, I know. The trick is to realize that worrying gets you nowhere; the bad feelings will remain intact.

But once you start, even with the smallest thing, and you progress, these feelings will start to fade away.

Remember to bite off the smallest possible piece for starters – it’s just easier to digest.

To make it more effective, you need a little mind trick, a little routine.

A completion ritual.

It may be something as simple as pulling a card into a ‘done’ column on your Trello board. Ticking a checkbox on a todo list, going for a smoke, if you please. Whatever works for you.

It’s such a small, seemingly irrelevant thing and I’ve been failing on this for a long time. I didn’t see the value. But it can work magic.

Did you know that forcing yourself into a fake smile actually makes you happier? This is similar. The completion ritual has a positive effect on your brain, no matter how small and trivial the tasks you finish may seem to you.

2. The Shame

So you’ve started. And you’ve written some good code. It’s decent, you’re proud and happy with it. All good. And then, after a couple of months you want to add a feature. You look at your previously-super-duper code and all you can think of is “Man, who wrote that crap?” Ask your experienced colleagues how many times they have felt the shame.

It’s fine. Don’t fight it.

It means that you’ve progressed, that you’re growing, that you can see your mistakes. Nevertheless, you still feel bad and ashamed.

The key is to understand that, just like you’re not your 8th grade English paper nor your college entrance score – you are not your code.

My tip here is very simple to grasp and difficult to master: detach yourself from the results; treat them as being as external to you as possible. It’s not going to happen overnight but if you keep reminding yourself often enough – you’ll get there.

3. The Imposter

Sometimes your code looks gross to you but there are people around saying it’s good. Users are giving feedback: “Hey, thanks, it solved my problem!” Your colleagues are appreciating your work, heck, you may even be getting a promotion.

And then, a funny thing happens – you feel like a fraud.

It’s fine. Don’t fight it.

It’s a proven psychological phenomenon called Imposter Syndrome. I rarely meet a developer who is completely free from it.

Dealing with imposter syndrome is arduous and I am still looking for my own ways.

Please check out these articles for some tips that may work for you: How I fight the imposter syndrome, Feel like an impostor? You’re not alone.

imposter-syndrome

source: @rundavidrun

Keep in mind:

It’s not who you are that holds you back. It’s who you think you’re not.
~Denis Waitley


4. The Expert

Knowing you’re not a fraud is one thing, but this alone doesn’t make you an expert yet. Speaking of experts, I absolutely love this definition of an expert:

An expert is a man who has made all the mistakes which can be made, in a narrow field. 

~Niels Bohr

And it’s really as simple as that. Go make your mistakes. Fail, fail and then fail better. Take a look at the picture. This is a Lunar React workshop. These guys have years of experience in their respective fields. Wojtek has been testing apps on Java, C and Rails platforms for years. Ania is fluent in Ruby, JS, Objective-C, Swift and what not. Cichy, my good friend, is my JS go-to person. To me, they are all experts in their respective fields. And yet, guess what day the workshop was happening?

Lunar React Workshop

Saturday.

These guys came to the office on their free day and studied React for 8 hours.

The message here is clear. Keep learning and accept the truth: You will suck in the beginning. But then again:

It’s fine. Don’t fight it.

Sucking at something is the first step to becoming sorta good at something.

~Jake the Dog


5. The Perfectionist

Needless to say, in the beginning you’ll make a graveyard of mistakes and your work will be far from excellent. You’ll encounter complex problems with many rational solutions and it will be difficult to decide which way to go. Should I use inheritance or mixins? Does this belong to a separate class? Am I using too many mocks in this test? Questions, questions. Questions everywhere.

It’s fine. Don’t fight it.

There is always more than one solution to a given problem. There is always something you could fix or refactor forever. There is never one definite answer to a design problem. The golden answer to any architectural question is “it depends”. Every design decision has its tradeoffs. Learning how to assess those tradeoffs is a lifetime challenge.

If you ever happen to dwell on some issue for days, remember: better done than perfect. Don’t try to reach the absolute. Focus on delivering and take shortcuts if you need to. We all did. Sometimes we even laugh about it:


Must-have programming books

The truth is we’ve all done those shady things. Who has never copy-pasted some code from Stack Overflow? Googling error messages? Every freaking day. Trying stuff until it works? The story of my life.

They are probably not the best practices but you should not hesitate to use them. If it helps you to move on, to deliver, to solve a problem you’re stuck with – do it! You’ll revisit later. Or not. The world is not going to fall apart.

6. The Hermit

It’s fine. But fight it. Don’t go alone.

The biggest mistake I made in the early days of my career was not engaging enough with the community. You know, social fears, low self-esteem, etc.

Find yourself a programming buddy or a mentor, go to a local programmers’ meetup and leverage social media (Programmers on Snapchat).

Find people who are interested and talk to them about what you do. There are lots of them out there waiting to listen and to help you.

Programming is not a solo act, it’s a team sport. And it’s not so much about the code as it is about the people.

Final round

No one said it’s going to be easy. The enemies are real, the challenges are big.

But once you learn how to deal with them, once you manage to reach your inner zen – you’ll be rewarded. If you’re lucky you may even get into the state of flow. And then you know, you’re in the right place.

For me, programming is a satisfying job and one that keeps me in a positive state of mind most of the time. A mental state in which I feel that I am constantly growing. Not only in terms of technical skill but, more importantly, as a human being.

Do you have similar experiences? Or perhaps you have other enemies you’re fighting every day? Please share your story in the comments and let’s talk about it!

PS. All drawings by the one and only Gosia.

A story that I frequently share when speaking at conferences is the one when I finally re-hired Ania. Once she agreed to rejoin us, I was so ecstatic to share the news with everyone. And then Tomek and Marcin popped up at my desk with sad faces to tell me: “Pawel, it’s not how we hire here anymore.”

One thing I realized back then is how successful we were with distributing autonomy. After all, two developers telling the CEO that he had no right to make a call about hiring a new employee tells quite a story. Another thing though was how much our hiring process had evolved up until then.

Since that awkward conversation, the evolution has continued. While I think that by now we have a fairly stable hiring process, I do expect it to be subject to change in the future. After all, our experience in recruitment, as well as our awareness of what we are looking for in candidates, improves as we get more and more chances to practise.

What We Are Looking For

First things first: what we are looking for when we talk to a candidate. As the lead text on our job page suggests, technical skills are only a part of the story, and not even the most critical one. Don’t get me wrong. It’s not that you can slip through hiring after finishing a couple of online courses for software developers. In fact, the technical bar is set fairly high.

You can, however, be a damn good developer and still fail. I told you: engineering skills are neither the only nor the most important skill set that we seek.

By the way, while in the examples I will use an archetype of a software developer as this is the most common role at Lunar, the story is true for graphic / UX designers, testers and all the other roles too.

The lens we use to look at a candidate’s engineering skills is craftsmanship. I tend to describe craftsmanship as taking pride in the quality of the work we do and continuously looking for better ways of doing things.

In a way, it is fuel to our personal learning vehicles. We want to get better at what we do because that’s who we are.

And that’s crucial, as whenever we hire a person we don’t hire them for what they know right now; we buy their long-term potential. That’s why a craftsman with lower technical skills will most likely win over someone who is already damn good but doesn’t share that attitude.

Then we have the most important set of traits that we look for. Team skills.

I guess the meta-trait that we seek, and you’ll find a hint on our job page, is that we want people who join us to notice and help a troubled colleague.

It means an understanding of teamwork as collective effort as opposed to an independent race for everyone involved. We are only as fast as the slowest person on the team. This rare trait basically makes everyone else on the team better. It is the true north for us. Interestingly, it doesn’t matter that much whether that person is the most technically skilled engineer in a team.

It means perception of others. How they are feeling. How they are acting. How they are behaving. It may be through empathy. It may be through sensitive perception. It may be through conscious effort. Whatever the means, we want team members to notice others.

It also means acting on what you see. It’s not only a willingness to help others but also how one helps. Help that is forced on someone is often counterproductive. We need to understand others enough to know what kind of help they look for and what kind of help they are willing to accept.

It takes a lot of soft skills and awareness to excel at that. While we don’t look for perfection on this account, we need people who show the potential to excel as team members and leaders. Interestingly enough, the way we understand leadership means that we expect everyone to be a leader in a fitting context.

Oh, there’s one more skill that I keep forgetting about. You can score high on everything else but if you don’t speak really good English then it’s a no. For us, communication in English is like a driving licence for a driver. We. Really. Need. That. No. Kidding.


Cultural Fit

There is also something that is fairly vague but super-important. We look for a cultural fit. This one requires a little bit of explanation, though.

Most of the time when I hear about cultural fit and have a chance to ask what people actually mean by it, I end up disheartened. The most typical notion of cultural fit is people whom we would get on well with. Well, there’s a huge problem with such an approach.

In general, we tend to like people who are fairly similar to ourselves. Folks who have similar interests, similar walks of life or similar characters would likely be those who will make us feel most comfortable. The problem is that they are culturally very similar to us. In other words, if we defined cultural fit this way it means that we’d end up with a very homogeneous culture.

We’d feel comfortable at the office, that’s for sure. Would it help us be more effective? Not at all. Worse, we’d risk a close encounter with a phenomenon called groupthink, which leads to conformity and a lack of critical evaluation. Nothing that you’d want to see in an industry that has solving complex problems at its core.

The way we understand cultural fit is that someone fits our (very broad understanding of) culture, shares our values, but at the same time would stretch that culture in a way. We want people who won’t introduce too much friction. But some tension is actually a plus.

We want people to have the potential to lead us in at least one of many dimensions which we want to improve. Be it how we learn, a technology skill, interpersonal dynamics, atmosphere, organizational stuff, empathy, respect, etc. It doesn’t have to be something specific. It should be something, though.

In other words, ideally, we look for people who would be thumbed up by most of us (a signal of fitting into the broad culture) and thumbed down by few (a signal that we’d likely feel pulled out of our comfort zones by a candidate).


Process

Now that you know what we look for, here’s how we look. We kick things off with a technical evaluation. Yup, you heard me. I mentioned that technical skills are neither the most important nor the only part that we focus on, and yet we start with this.

The reason is very simple. We want to make sure that a new hire won’t be a burden for their team. In other words, wherever the bar is for a specific role, we want a candidate to be above it. Now, depending on the context, we’d expect different expertise levels. We don’t expect interns to be ready to instantly jump into a commercial project (even if it has happened here before). We don’t want developers to have a half-year learning curve before they are ready to get the ball rolling on a billed project either.

Anyway, the critical part of that stage is that as long as a candidate is above the bar, it’s fine. It doesn’t matter whether they clear it by a hair or breeze over it without breaking a sweat.

Ultimately, it’s a kind of filter. Depending on the context, it may consist of any of these three elements.

Recommendation

We trust referrals made by Lunar folks. If one of us knows someone and strongly believes that they’re a good fit, then it’s a pass.

Interview

A short interview, typically around half an hour long, with a couple of Lunar engineers. In other words, a candidate would be talking with fellow developers, designers or testers, and not a manager (we don’t have one, so that would be hard anyway). We focus mostly on technical stuff, but the most obvious personal characteristics will be evaluated too. Normally the interview is done at the office, but it occasionally happens over a video call too.

Homework

Used mostly for internships, but we may also try it in other cases. We give a candidate homework to do and then evaluate the result. A follow-up to the homework will almost always be an interview. Homework may mean writing a bit of code, which is the most common case, but it can be as weird as asking candidates to read a book.

Once a candidate makes it through the initial filtering, which may require literally nothing on the candidate’s end if they have a strong recommendation, we’re down to the last part. We call it Happy Hours, and originally it was dubbed Demo Day.

Happy Hours means spending a few hours with us, typically between 4 and 6, during our regular workday. The goal is twofold. First, we want a candidate to develop an opinion whether Lunar Logic is a good place for them to work at. Second, we want to give all Lunar folk an opportunity to develop an opinion about the candidate. The latter part is not mandatory, i.e. everyone at Lunar is invited to take part in Happy Hours but nobody is forced or even encouraged to do so.

The activities that happen during Happy Hours vary from those focused on craftsmanship (e.g. pair programming), through those that validate how one thinks (e.g. a design workshop), to those that tackle soft skills (e.g. a chit-chat about the candidate’s stories of teamwork).

Happy Hours are not structured, so it feels a little bit like passing a candidate from one person’s hands to another’s. There’s quite a lot of loose discussion in the kitchen in a bigger group too. Ultimately, these kinds of interactions would be happening on a regular day at Lunar.

After Happy Hours, we collectively make a decision. It follows the Decision Making Process pattern so there’s a broader discussion among everyone who took part in Happy Hours. Then someone makes an autonomous call whether we make an offer or not.

We don’t seek consensus in these discussions. However, for a positive decision, a significant majority for “yes” is typically required. In other words, a set of fairly evenly distributed opinions (some for “yes”, some for “no”, some for “meh”) most likely means a “no”.

The last step is figuring out what the financial offer for the candidate should be. To simplify things we typically run a super quick salary process, exactly the way we do it for ourselves when we discuss raises. This makes the offer fair in relation to current salaries at the company.

It does mean that, while we obviously are interested in how much you expect to earn, we will propose a salary. Since the offer aims to keep our payroll fair, it is highly unlikely that we’d be open to heavy negotiation. As one famous football coach said: no player is bigger than the club.

That’s it. It may sound like quite an elaborate thing, but in some cases it boils down to a fairly informal, few-hour-long visit to the office.

I’d like to write something like “Since we adopted that technique…” but it wouldn’t be true. There wasn’t a single point when we started hiring this way. It is more an outcome of how this process evolved over time. Throughout this evolution, we’ve been making fewer and fewer hiring mistakes, which seems to validate how well the process works for us.

Some time ago

A few months ago we published a series about creating applications using React and related architectures (Flux and Redux).

Since then, we also started using React Native and we think it’s awesome!


The goal of the post

This post is not meant to be yet another React Native tutorial because I believe there are already many good ones available all over the Internet.

I just want to show you the app from my previous posts, implemented in React Native. I’m really excited about how similar it looks to the web version and how smoothly you can implement things.

Let’s build the app!

package.json

We will install a very similar list of dependencies to the ones we used in the web app:

We can leave out the dependencies related to making the app universal because there is no such problem in a native app – we don’t need server-side rendering, since a native app is not crawlable and we don’t have to worry about SEO.

More information about the importance of the universal feature of a web app can be found in the first post of the series.

Also, it is not necessary to explicitly define the babel stuff, because it’s part of the React Native dependencies.

Lastly, we don’t need to worry about building and compiling assets – React Native does this for us. Therefore, we can leave out Webpack.

index.ios.js

The entry point of the whole application is index.ios.js:
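A minimal sketch of it (‘ProjectName’ stands in for the actual project name):

```js
import { AppRegistry } from 'react-native';
import App from './src/components/App';

// register the root component under the name used by the Xcode project
AppRegistry.registerComponent('ProjectName', () => App);
```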

There we render our main component – App.

App.js

Let’s create a src directory, where we will put all of our source code.

Also, create a components directory inside, we will store all components there.

So let’s add the first one – App.js:
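A sketch, assuming the same redux wiring as the web version:

```js
import React, { Component } from 'react';
import { Provider } from 'react-redux';
import store from '../store';
import Routes from '../Routes';

export default class App extends Component {
  render() {
    // same idea as application.js on the web: wrap everything in a Provider
    return (
      <Provider store={store}>
        <Routes />
      </Provider>
    );
  }
}
```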

It looks very similar to the application.js file we had in the web version. The whole idea is the same – we use the same redux library.

store.js

To make it work we need to add a store:

This is also almost the same as the web version. The only difference is that we can remove all the stuff related to making the app universal – so the store is even simpler.

Routes.js

On top of the store we also need Routes:
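A sketch using the library’s Router and Scene components (the scene keys are assumptions):

```js
import React from 'react';
import { Router, Scene } from 'react-native-router-flux';
import SubmissionsContainer from './components/SubmissionsContainer';

export default function Routes() {
  return (
    <Router>
      <Scene key="root">
        <Scene
          key="submissions"
          component={SubmissionsContainer}
          title="Submissions"
          initial={true}
        />
      </Scene>
    </Router>
  );
}
```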

I found react-native-router-flux very convenient to use. And again, it’s very similar to react-router that we used in the web app.

SubmissionsContainer.js

Now we can finally define our first container – SubmissionsContainer – which will be responsible for displaying the submissions list:
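A sketch of that simplest version:

```js
import React, { Component } from 'react';
import { connect } from 'react-redux';
import { View, Text } from 'react-native';
import styles from '../styles';

class SubmissionsContainer extends Component {
  render() {
    // render each submission as a simple row of text
    return (
      <View style={styles.container}>
        {this.props.submissions.map((submission) => (
          <Text key={submission.id} style={styles.row}>
            {submission.first_name} {submission.last_name}
          </Text>
        ))}
      </View>
    );
  }
}

export default connect((state) => ({
  submissions: Object.keys(state.submissions).map((id) => state.submissions[id]),
}))(SubmissionsContainer);
```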

This is the simplest version that you can implement and the most similar to the web version. But you can also use the React Native built-in component that translates to the native list control. See the documentation here.

Other redux stuff

The rest of the redux stuff here is exactly the same – create action_creators, reducers, constants and lib folders (as we had before) and put SubmissionsActionCreator, SubmissionsReducer, ActionTypes and Connection there accordingly.

My favourite feature

Apart from developing for two platforms at once, for me, the killer feature is that it’s so easy to do pretty things. The thing I hate while developing a pure Objective-C/Swift app is that many things related to the UI (like colors, font sizes, etc) are mixed with application logic.

Of course, you can set some things using Interface Builder but there are a couple of problems with that. Firstly, I’m not a big fan of Interface Builder. Secondly, not everything can be set there and it’s just better to have main colors and things like that saved in constants.

React Native uses Flexbox for styling your components. Although not all the features from web flexbox are implemented yet, I love it!

And styles are the last file needed to make our app work:
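A sketch of src/styles.js:

```js
import { StyleSheet } from 'react-native';

// flexbox-based styles kept in one place, away from the components
export default StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    backgroundColor: '#ffffff',
  },
  row: {
    fontSize: 16,
    margin: 8,
  },
});
```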

Run

Now you can open the ios/ProjectName.xcodeproj file in Xcode and run the app!


Platforms supported

At the time I was playing with React Native, there was support only for iOS and Android.

While developing an internal project I noticed that, for now, iOS is supported better. But provided you don’t do too much custom stuff, you should be good with Android too.

If you implement the app with common controls you should be fine. If you want to do things like drawing custom shapes using native libraries, you need to check Android support first ;]

Just a week ago, the React Native team announced that they had joined forces with Microsoft, bringing React Native support to the Universal Windows Platform. It means that it will be possible to develop for Windows Desktop, Xbox and Windows Phone in React Native.

I haven’t tested this yet, but it looks promising. See more details here.

Differences in native controls between platforms

You are probably wondering what to do if you need to make one version (e.g. Android) different.

That’s not a problem! You can easily make different components by just adding a platform suffix to the file name – e.g. SubmissionsContainer.android.js.

React Native will render the proper one depending on the platform.

Summing up

Developing with React Native is smooth and enjoyable. The only thing you need to keep in mind is that this is a tool for doing front-end. If your mobile application needs to perform high-load operations then it’s not the right tool for you. But if all your high-performance operations happen in your backend and you use the mobile app only as a client, it’s very convenient.

The problem I have with functional programming concepts is that whenever I learn about them, it’s usually about monads, closures, folds, infinite streams etc. I know they are cool and all but, honestly, I rarely see a good use for them in my daily work. I am mostly a Ruby dev; I like to get stuff done without too much ceremony. And I really like OO, despite all its shortcomings. There are times, however, when situations call for something better.

This is a story of how we ended up with pretty cool functional code in an evolutionary way.

Context

We have a project, which is a tool that enables users to get a mortgage online. Actually, it’s the first app that lets you get a mortgage sitting at home in your pajamas. 100% online.

Firstly, customers fill in lots of data about themselves and the property, then the system presents a list of mortgage offers from various banks. To get those offers as exact as possible, we need to calculate many determinants (things like property pledge, financial portability, amortisation, etc.). These calculations are pretty straightforward, but there’s a lot of them. Moreover, they’re interconnected: results of some may be used in subsequent calculations down the road. Finally, we end up with a few important values (which determine the final mortgage decision) and a considerable amount of data, all of which needs to be stored, e.g. for presentation purposes.

MVPs are like cheap wine.

Cheap wine is good because it’s good and cheap. So are MVPs. You’ll want something better much sooner than you’d expect.

At first, there were only two banks and one robust algorithm, separated into many classes for clarity. Then, as we added more banks to the platform, things started to get complicated. Various requests from clients began to emerge: calculate retirement age differently, use gross income instead of net income, divide instead of multiply, etc. Different banks had different formulae for things. You get the idea.

Enter the strategy pattern.

We started spinning off parts of the algorithm into separate classes and injecting them dynamically into the main template. Nothing unusual – the classic strategy pattern. It all looked good. But it grew and grew, and then it grew just a tad too big. The code became messy and unreadable. Strategies started to have their own sets of strategies; layers of abstraction were multiplying like crazy and it was killing us. For newcomers to the project, it was almost impossible to understand what was going on. The domain knowledge was lost between the lines. The bus factor plummeted.

The project was live and starting to generate income, but adding each new bank to the platform was taking 1-2 weeks. It was a crucial process for the business and it simply took too long.

It never rains but it pours.

As if this wasn’t bad enough, then came a real bummer: a new feature request for a view with a summary of all the calculations. Not only did we have to save all the numbers, but we also had to persist the formulae used to calculate them… How could we add another layer to this already messy code?

We didn’t. We created a separate set of decorators just to handle this. It worked for the moment, but now the knowledge was in two separate parts of the system. We were facing a shotgun surgery issue on top of all the previous problems.

We realised it was time to take a step back and reassess.

After talking to our client, we decided that we were going to spend some time refactoring and pay back part of the technical debt.

Back to square one.

Refactoring started with gathering the requirements:

  1. We have initial data, mostly numbers and booleans, coming from user input.

  2. We have an ordered list of calculations to be performed on the input data. These are our previous strategy objects.

  3. We should be able to reuse results from all calculation steps.

  4. We need all the results in the end.

  5. For some of the results we need not only values, but also the formulae.

  6. We need all of the above to be as flexible as possible. When a new bank joins the table, we should be able to adjust independent parts of the algorithm without too much hassle.

Input and output. United we stand.

The input hash, and what the output looks like after the calculations:
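Since the original snippets aren’t reproduced here, a hedged illustration with made-up keys:

```ruby
# Input: a plain hash of user-provided values (keys are made up)
input = {
  gross_income:   8_000,
  property_value: 450_000,
  loan_amount:    360_000
}

# Output after calculations: the same hash, extended with every computed value
{
  gross_income:          8_000,
  property_value:        450_000,
  loan_amount:           360_000,
  net_income:            6_160,
  property_pledge:       0.8,
  financial_portability: true
}
```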

Strategy objects.

What are the strategy objects we’ve been using so far? They are the atomic pieces of the algorithm. Like steps in a cake recipe. Do we really need the OO boilerplate? Strategies could be stateless, so why not just use functions? Oh, it’s Ruby. There is no first-class function concept. Perhaps we could use lambdas. But we’d like to get the strategies tested and possibly reuse some of them for various banks. How about modules with one static call method? Since we are passing the entire data hash to each calculation function, we need to “swallow” unnecessary keys. This is where Ruby’s keyword arguments and the double splat operator come in handy. Dig it, it’s awesome.
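A sketch of one such strategy, with assumed names and a made-up formula:

```ruby
# A module with a single `call` method; keyword arguments pick the keys
# this step needs, and the double splat swallows the rest of the data hash.
module PropertyPledge
  def self.call(loan_amount:, property_value:, **rest)
    { property_pledge: (loan_amount.to_f / property_value).round(2) }
  end
end

PropertyPledge.call(**input) # => { property_pledge: 0.8 }
```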

Banks.

Bank parameters are an example of externally configurable factors – you can get them from the DB, for instance. The evaluation steps are what’s interesting here. It’s a line-up of calculations to be performed on the data. This is each bank’s recipe for the final answer.

Putting it all together.

This is our entry point. Let’s roll: we pass the hash from one step to the next using the inject method. Each step takes whatever it wants from the hash, works on it, and adds the result as a new key-value pair. In the very end, it’s all in the final hash (it smells a bit of primitive obsession, but let’s keep it simple for the sake of the example).
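A sketch of that entry point (the module and constant names are assumptions):

```ruby
module MortgageEvaluation
  def self.call(input, steps)
    # fold the hash through the bank's ordered list of strategies,
    # merging each step's result back in
    steps.inject(input) do |data, step|
      data.merge(step.call(**data))
    end
  end
end

MortgageEvaluation.call(input, BankA::EVALUATION_STEPS)
```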

New boys in town.

When you need to add new strategies, which may differ between banks, you’ll do it like this:
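For instance, a bank that calculates retirement age differently could swap just that one module in its line-up (all names here are hypothetical):

```ruby
module BankB
  module RetirementAge
    def self.call(gender:, **rest)
      { retirement_age: gender == :female ? 60 : 65 }
    end
  end

  EVALUATION_STEPS = [PropertyPledge, RetirementAge]
end
```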

Fail Better.

One problem we encountered was ambiguous error messages when a strategy couldn’t find a required key. With a little bit of Ruby magic we managed to improve that.

Now, when you get the error, you know exactly where to dig:
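A guess at the kind of fix, since the original snippet isn’t shown: rescue at the point where a step is called and name the offending strategy in the message.

```ruby
steps.inject(input) do |data, step|
  begin
    data.merge(step.call(**data))
  rescue ArgumentError => e
    # e.g. "BankB::RetirementAge: missing keyword: gender"
    raise ArgumentError, "#{step.name}: #{e.message}"
  end
end
```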

Final thoughts.

The solution meets the requirements mentioned above. It looks simple, and indeed it is. Not only is it easy to use, it is also elegant and extensible. We’ve been using it in production for 4 months and haven’t encountered big issues so far. What’s more important, however, is that we have successfully reduced the time needed to add a new bank to the platform from 5-10 days to 2-4 days. It’s something.

Additionally, testing is now super easy. You can unit test each atomic strategy independently.

As a bonus, if you’re as lazy as we are, you can always make an inline strategy by defining a lambda in the evaluation steps template like this:
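Something along these lines (the tax factor is made up):

```ruby
EVALUATION_STEPS = [
  ->(gross_income:, **rest) { { net_income: (gross_income * 0.77).round } },
  PropertyPledge
]
```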

 

Of course, this is not a silver bullet and has issues of its own.

The high connascence of name for input and output keys is the biggest problem. We work around that with one integration test for each bank to make sure that we’ve got good coverage. Another problem is that modules can’t have static private methods, which would be helpful for some more complex strategies. Should you see any other issues, please let us know in the comments.

All in all, it’s been an interesting exercise for us and one which proves how flexible and fantastic the Ruby language is.

Please do share your thoughts in the comments.

PS. If you’re wondering what happened to the formulae requirement, stay tuned. We’ll cover that in the second part.


UPDATE 22.04: This post has been updated with new information on 22.04.2016.

As you may know already, we are quite a unique company when it comes to organizational culture. We value empathy and transparency. We collectively manage the company. We have open salaries, advisory process, collective hiring, self-organizing teams etc. For the last 3 years, we have been evolving toward becoming a no management organization.

We’ve been blogging and tweeting about this for some time now, but today we want to try something new: Lunar on Snapchat!

We are heavily inspired by Andrzej Krzywda and his ongoing experiment Programmers on Snapchat. The idea is neat and simple: share programming-related content on Snapchat and create a community using this new, ephemeral medium. Please do read Andrzej’s article to learn more about it.

Update 22.04: Andrzej has created a very simple app – DevSnap, a directory of developers on Snapchat. It’s growing like crazy. At the time of writing this post it lists 55 developers. So please check it out and follow these programmers, they share good content. Also, don’t hesitate to add yourself there, even if you don’t snap much. You’ll start soon enough :) And once you’re hooked, you’re hooked.

Besides taking an active part in this experiment, we want to add another layer to it. A little more personal one. We strongly believe that programming is not only about code and tools but, most of all, about people and interactions (sounds familiar?). So we are going to share snaps from our day-to-day office life.

Do you want to know what Lean Coffee is? How the Happiness Chart works? How feedback works in a no-management environment? What we do to keep our company going in the participatory leadership model?

If you are interested in all this and want some first-hand experience – follow us on Snapchat! And do expect a solid amount of inspiration. Of course, we are going to share a lot of programming stuff too and some personal snaps here and there.

Find our snapcodes below. Scan them or add us by usernames. We’ll add you back :)

Tomek @ Lunar: rusilko, Artur: arturtrzop,  Ania: szynszyliszys, Dawid @ Lunar: cichaczem



Update 22.04: New folks on board:

Paweł @ Lunar (our CEO): pawelbrodzinski, Tomek Giereś: tomaszgieres, Maro: mareczekc


Wroc_love.rb has, yet again, lived up to expectations. When I attended Wrocław’s Ruby conference two years ago it was a real eye-opener for me. And one which shaped my personal development as a coder. This year, I wasn’t expecting that much, but still went to the Lower Silesian capital with a fair amount of excitement and high expectations.

The feels

Wroc_love.rb conferences are funny in a way. On one hand, there’s not much heat or energy (the opposite of what I experienced at BaRuCo or Craft, for instance). It seems slow-paced and sleepy. People are lazily flowing through the UW corridors, quietly chatting or typing something on their phones. No hassle, no noise, no craziness (it starts late, at 11 AM). However, in some inexplicable way, this atmosphere is stimulating, inspiring and energizing as hell.

Maybe it’s the careful choice of talks that are aimed to address a broad range of disciplines. Perhaps, it’s the discussion panels with experienced programmers. Or it could be due to this insane idea of bringing concepts from other technologies to the Ruby world?

Whatever it is, it works. And surely, it’s worth a trip to Wrocław.

The talks

Let me quickly recap a few talks that stuck with me:

Basia Fusińska got us started with a well-prepared lecture about the R language. In an entertaining and engaging way, Basia walked the audience through the crazy features and syntax quirks of a language created by statisticians. Although some of these quirks are utterly insane, it is always valuable to see something entirely different from Ruby. And since we are currently writing some R code at Lunar, it was good to learn a few new tricks.

The first discussion panel was the classic “vim vs. emacs fight”. Only taken to a whole new level, for everyone interested in how other devs set up their optimal workspace. We covered Vim, Rubymine, Atom, Sublime Text, Spacemacs and even good old TextMate! Many pro-tips were collected. For me – a vim power user – the winner is surely the map-capslock-to-escape trick. Though I must admit that after seeing Tatiana Vasilyeva’s Rubymine presentation, I am seriously considering switching to the JetBrains product. The only question is: does it have a vim mode?

The second day brought us some more serious stuff. Deployment. It’s not my thing really, so I was happy to hear how professionals are solving various day-to-day admin problems. From server configuration to deployment scenarios, to monitoring, to backup strategies, whatever question might have been troubling you, you had an expert answer on the spot.

“Lessons of Liskov” was, without a doubt, the best talk of the conference. In four acts, Peter Bhat Harkins:

– explained the difficulties you may have understanding the Liskov Substitution Principle;

– showed how to spot places in the code where there are “bugs waiting to be written”;

– demonstrated how to avoid “oh what the hell now” situations, when you get an exception five steps from where the bug is.

As a conclusion, Peter proposed extending the LSP to a general Substitutability Principle, which boils down to the idea of writing more substitutable modules.

The lecture was very well received: “Accurate level of balance between abstract concepts and practical tips – just as I like it”, to quote one of the conference attendees.

Also, Peter turned out to be one of the best speakers I’ve had a chance to watch on stage: fluent, prepared and passionate. If you have time to see only one talk from this conference – choose this one – it will be worth your time.

Personal agenda

This year’s conference was special for me because, for the first time, I was also a speaker. It was only a lightning talk, but still, it is a little milestone. I did a talk about Projectr – a data-driven estimation toolkit – our new Lunar toy. Please check out the slides and/or follow us, if you’re interested. We’ll be posting a lot more about this concept very soon. Plus, if you happen to have seen my talk – please send me feedback and hit me with any questions.


Final sentence

Wroc_love.rb regularly proves itself to be one of the most inspiring conferences in this part of Europe. Loud congrats and many thanks to the organizers, mentors, and speakers. Well done! See you next year.

CSS animations have been in regular use for a few years now. Used correctly, they are a fantastic way to enhance your website and help users understand interactions better. Unfortunately, as easy as they are to use, there is a high chance that you are forcing your user’s browser to perform costly operations that slow down the whole page. Let’s see: have you ever animated an element’s width, height or top position with CSS? If the answer is “yes”, it means that you triggered expensive layout recalculations that might have resulted in jank when viewed under certain conditions.

Getting to know our friends among animations

The best way to avoid laggy animations is to stick to ones that make good use of the GPU and don’t affect the layout or paint of the website. That is why you should only animate transforms (translate, rotate, scale) and opacity. These properties should easily satisfy your needs when it comes to simple animations. Also, it is best to animate absolutely positioned elements, which won’t push other elements around the page. These two rules are already enough to speed up your framerate to 60fps and set the GPU memory buffer free in most cases. But that’s not all. There is one other handy technique that can help you create really lightweight animations.

The FLIP technique

Last year I had the pleasure of listening to Paul Lewis’ presentation on web performance. It truly blew my mind, and buried amongst a few other interesting things there was this gem of awesomeness: the FLIP technique. Its simplicity and advantages made me LOVE IT. So what is the FLIP technique? FLIP stands for First, Last, Invert, Play. This quote from Paul’s GitHub repository for the FLIP helper library sums it up perfectly:

FLIP is an approach to animations that remaps animating expensive properties, like width, height, left and top to significantly cheaper changes using transforms. It does this by taking two snapshots, one of the element’s First position (F), another of its Last position (L). It then uses a transform to Invert (I) the element’s changes, such that the element appears to still be in the First position. Lastly it Plays (P) the animation forward by removing the transformations applied in the Invert step.

So basically, you remove transform instead of applying it. Why? Well, this means the browser already knows the points A and B for the element’s journey and is able to start the animation faster. The FLIP technique will give you the best results when an animation is played on user input. The difference might not be huge, but on a phone with a less powerful CPU this can be the difference between it feeling like an immediate or delayed response from the website. Once you get used to the idea, writing animations the FLIP way feels natural. Here’s a small code example using the FLIP technique:
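A minimal version of it, with made-up class names, matching the 150px move described below:

```css
/* Invert: a negative transform makes the element appear to still be
   at its First position */
.box {
  transform: translateX(-150px);
  transition: transform 0.3s ease-out;
}

/* Play: removing the transform lets the element travel to its Last position */
.box.is-playing {
  transform: none;
}
```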

As you can see, I just reversed the order of the animation. Instead of pushing the element 150px from the left to the right, I pulled it to the left with a negative transform value and then removed that transform entirely (set the transform value to “none”).

Building on a new discovery

What I discovered was that not many people seem to know this approach. I couldn’t get it out of my head and decided to do something to convince more and more people to join me on the journey to faster animations. I knew there were many popular animation libraries, e.g. animate.css, but they did not use the FLIP method and included animations that might cause website repaints. Therefore, I made a list of moves that can be done using only safe transforms and opacity and decided to build a small CSS library that contains only lightweight animations. Once the animated elements are painted to the browser window (which is really fast, btw!), they run at a stable 60 fps and consume next to no browser resources. There are no repaints after that, hence the library name: repaintless.css. The gif below shows the animation running in the browser with the Chrome DevTools FPS meter on:


60 fps animation achieved with the repaintless.css library.

To show that repaintless.css runs really smoothly, I have prepared a small demo page. As I wrote before, the FLIP technique gives the best results when triggered on user input, so you can start animating elements by clicking “PLAY” on a middle square and see how fast the animation responds. The filters (for now visible only for 768px and wider screens) can help you test different animations individually.

If you are interested in using the library, go to the repaintless.css Github repository and follow the instructions in the readme. If you’d like to help me improve the code or just have an idea for an animation, a pull request is always welcome. Bear in mind that the repository is quite fresh and I am still fine tuning it. In the future, I plan to add more moves and enable custom gulp builds with only the animations you select. At the moment, to achieve that, you need to download the whole repository, remove the unwanted @imports in the repaintless.scss file and run gulp build. Not perfect, but doable. :)

With great power comes great responsibility

I hope that after reading this article, you’ll always think twice before coding animations and try to make them as fast and light as possible. There are plenty of great articles about performance; this one by Paul Lewis and Paul Irish is really worth checking out. Also, there is a terrific page that shows you how animating different CSS properties affects the page’s rendering load. With this knowledge and a little practice, you’ll become a performance guru in no time.

PS. I wondered what the performance would look like if I built the worst possible version of this animation. I decided to do a quick check with just one element from the demo animation. The result was outrageous! Even with all I’d learned, I didn’t expect so much lag. Shown in the gif below, I animated the margins (never do that!) so it goes from a -200px left margin to a -200px right margin (terrible!):


Terrible animation performance when animating margins.

Are you an awesome team player who loves spending time working with other people? Do you have what it takes to be a software developer? Do you want to become part of the Lunar Logic team?


 

How about joining us for the internship?

  • 3 months
  • In Krakow (sorry, no remote)
  • Full-time or part-time
  • RoR + JavaScript (most likely React.js)
  • Start date: up to you

What we offer:

  • Support on your learning path
  • An unusual work environment with kudos, badges and board games
  • A lot of fun
  • Salary: 2.5k PLN net (for full-time)
  • Type of employment: up to you

What we expect:

  • Decent RoR and/or JS skills
  • Passion for learning
  • Empathy and interpersonal skills
  • Communicative English

Apply for the internship »

 

Applications are open until 26.02.

Erstwhile in the adventures series


In the previous post we got to know Flux.

Full code of the application is accessible here.

We moved all the state modifications to stores, to have better control over the changes.

I’ve also mentioned that there is a mechanism for synchronising store updates. The truth is, though, that in a complex application handling store dependencies that way can become messy.

In this post we will update our app to use another pattern, which evolved from Flux – Redux.

General idea

As I mentioned, handling store dependencies when you have many stores can be tricky. That’s why the Flux architecture evolved, introducing reducers.

A reducer is a pure function that takes a state and an action and returns a new state depending on the given action payload.

It’s good practice to return a new instance of the state every time, instead of modifying the old one. Such immutability makes it much cheaper to establish whether a rerender is needed. You can read a really good detailed explanation here.

The main flow looks very similar to the Flux one:

  1. every state change needs to be done by dispatching the action
  2. the store gets the payload and uses reducers to determine the new state
  3. the view (“smart component”) gets the new state and updates its local state accordingly

I recommend reading more about reducers and Redux in general.

A thing worth emphasising is that there is only one store. You can, though, register as many reducers as you like while creating a store.

Let’s reduxify our app

You can now remove events and flux from the package.json.

And let’s add redux dependencies to our package.json by running: npm install --save redux react-redux redux-router redux-thunk.

Dispatcher

We won’t need our dispatcher implementation anymore, as there is one already in the redux library. Let’s remove it then:

rm src/AppDispatcher.js

We can also remove SubmissionStore (and any other stores if you added them).

We are going to create one general store.

Store

There will be one store class, but two instances – one for the client side and one for the server side:
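A sketch of the store module, reconstructed from the description below; the router-related enhancer differs between client and server, so it is passed in:

```js
import { createStore, combineReducers, applyMiddleware, compose } from 'redux';
import thunk from 'redux-thunk';
import SubmissionsReducer from './reducers/SubmissionsReducer';

const reducers = combineReducers({
  submissions: SubmissionsReducer,
});

// called once on the client and once on the server
export default function configureStore(routerEnhancer, initialState) {
  const finalCreateStore = compose(
    applyMiddleware(thunk),
    routerEnhancer
  )(createStore);

  return finalCreateStore(reducers, initialState);
}
```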

There are a couple of things going on here.

Firstly, we define the middlewares we want to use in the store. We are composing them using the compose method from the redux library.

I’ll say more about why we’d need any middleware later.

Secondly, we use the combineReducers method from the redux library to pass all reducers we need in our application to the store.

Reducers

The question now is: what are reducers?

Reducers are responsible for the state change.

They get the action dispatched from the component and calculate the new state if needed.

The whole application state is then passed to the component which dispatched the action and the component can choose what part of the state it’s interested in. More about this later.

Now take a look at our reducers:
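A hedged sketch of what they look like, reconstructed from the description below:

```js
import ActionTypes from '../constants/ActionTypes';
import SubmissionReducer from './SubmissionReducer';

export default function SubmissionsReducer(state = {}, action) {
  switch (action.type) {
    case ActionTypes.RECEIVE_SUBMISSIONS_LIST: {
      // map the incoming array to a hash keyed by submission id
      const submissions = {};
      action.submissions.forEach((submission) => {
        submissions[submission.id] = submission;
      });
      return Object.assign({}, state, submissions);
    }
    case ActionTypes.RECEIVE_SUBMISSION:
    case ActionTypes.RATING_PERFORMED:
      // a single submission update is delegated to SubmissionReducer
      return Object.assign({}, state, {
        [action.submission.id]: SubmissionReducer(state[action.submission.id], action),
      });
    default:
      return state;
  }
}
```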

When this reducer gets the RECEIVE_SUBMISSIONS_LIST action, it will take all the submissions that came in the payload (action.submissions) and map them to a hash with submission ids as keys and related submissions as values.

As I already mentioned, it’s good practice not to modify the state, but always return a new state object.

If you look at RECEIVE_SUBMISSION or RATING_PERFORMED, you can see that the new state is calculated using another reducer, SubmissionReducer:
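A sketch of it:

```js
import ActionTypes from '../constants/ActionTypes';

export default function SubmissionReducer(state = null, action) {
  switch (action.type) {
    case ActionTypes.RECEIVE_SUBMISSION:
    case ActionTypes.RATING_PERFORMED:
      // just return the submission from the action payload
      return action.submission;
    default:
      return state;
  }
}
```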

Here we just return the submission from the action payload.

Action Types

The action types file looks the same as before, but we have more actions.

This is because previously actions got directly to the store where a request to the API was made and where the state was updated:

But now the store just gets the state from the reducers. And reducers get state by calculating it from the action. So we also need actions that will return data loaded from the API.

That’s why now we have separate actions to request data and separate actions to receive data.

Before, we said that an action is just a simple Javascript object. But having the above in mind, we now also need a mechanism for dispatching not only pure object actions but also actions in which we will be able to perform a request to the API and dispatch an action with the received data when the request is finished.

That is why we need the middleware that I mentioned before. There is a library, implemented as middleware, called redux-thunk, which allows us to dispatch this kind of action.

We apply this middleware while creating the store:

You can also see here that we have a second middleware, needed for redux-router.

Action Creators

Thanks to redux-thunk we can now create the _fetchSubmission action:
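A sketch, assuming the Connection wrapper from this series returns a promise with the loaded data:

```js
import ActionTypes from '../constants/ActionTypes';
import Connection from '../lib/Connection';

export function _fetchSubmission(id) {
  return (dispatch) => {
    // dispatched before the request – handy e.g. for showing a loader
    dispatch({ type: ActionTypes.REQUEST_SUBMISSION, id });

    return Connection.get(`/submissions/${id}`).then((submission) => {
      // success callback: dispatch a standard action with the received data
      dispatch({ type: ActionTypes.RECEIVE_SUBMISSION, submission });
    });
  };
}
```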

As I mentioned before, we make an actual request to the API here, and in the success callback we dispatch a standard action with RECEIVE_SUBMISSION type, passing the loaded submission object to the payload. Now everything (state change) is in the reducer’s hands.

In the example we also dispatch an action with type REQUEST_SUBMISSION before the actual request is made. It’s not needed for loading the submission, but it might be handy if you want to react somehow to starting a request – like adding a loader etc.

In a real application, it would also be useful to add error callbacks the same way as we added the success ones.

Here is the full SubmissionActionsCreator example:

Submission Page

I’ve said that the dispatched action gets to the reducer, and the reducer calculates the state, which is used to update the store.

I’ve also said that the state is returned to the component which dispatched the action. Now we can see what it looks like:

Notice two important things here.

Firstly, we don’t use this.state anymore, we use this.props instead.

It’s possible because of these lines:
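Sketched, they boil down to react-redux’s connect plus a select function:

```js
import { connect } from 'react-redux';

function select(state, props) {
  // choose only the state parts this component needs
  return { submission: state.submissions[props.params.id] };
}

export default connect(select)(SubmissionPage);
```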

Thanks to these lines, the select method will be executed whenever the store state changes, and the component will receive the newly calculated state.

In this select method you can choose which state parts your component needs.

As the component in the example is a component for the submission detailed view, in the select method we choose the submission with the id specified in params.

That’s why we can use this.props.submission in the render method.

Secondly, notice how the action is dispatched – this.props.dispatch(performRating(this.props.submission, value)).

Thanks to the connect method we also have this.props.dispatch available.

Creating the store

Client side

The last thing we are still missing is actually creating the store object. We defined a method for creating a store, but we haven’t used it anywhere yet.

Let’s do this client side first. Edit your application.js to look like this:

Server side

And server.js:

Now you can see why we needed to define a method for creating the store.

It’s because a big part of the configuration (like reducers and middlewares) is the same on the client and server side, but some parts differ.

Notice that createHistory for the client side is imported from history/lib/createBrowserHistory and for the server side from history/lib/createMemoryHistory. It’s simply because on the server side you don’t have a browser.

It’s a similar thing with reduxReactRouter – for the client it’s imported from redux-router and for the server from redux-router/server.

Full rendering on the server side

In the first post of this series I mentioned that our app will be universal, which means that it will render on the server side too, so we can benefit from better SEO.

But when you check your source code, you can see that although our component tree is rendered correctly, we still can’t see actual data being rendered on the server side.

The data is still only visible on the client side. That’s because we use asynchronous requests to fetch the data, so the server renders the page before the request to load the data is finished.

Now that we have redux-router, it’s easy to fix. In routerState we have access to the component classes matched for this route.

Assuming that in each component that needs data fetched we’ll have a class method to fetch needed data, we can iterate through a given array and use this method.

Still, the requests will be asynchronous, so we need a mechanism to wait for all of them to finish, so we can finally render the page with all the needed data.

Here is where Promise.all comes in handy. It does exactly what we need: you can pass it an array of promises and invoke then on the result, the same as on a single promise.
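Sketched with assumed names (routerState comes from redux-router, as mentioned above):

```js
// collect a fetchData promise from every matched component that defines one
const promises = routerState.components
  .filter((component) => component && component.fetchData)
  .map((component) => component.fetchData(store.dispatch, routerState.params));

Promise.all(promises).then(() => {
  // safe to render the page with all the needed data now
});
```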

Now when we have a mechanism to retrieve the needed data before rendering a page, all we need to do is pass fetched data to the client side.

That’s why we needed window.INITIAL_STATE in our view. The server will save the initial state in window.INITIAL_STATE while rendering the page. Then the client side will configure the store using this state.

Let’s update server.js then:

Add these lines above our main application div:
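A sketch, assuming server.js passes the serialized store state to the view as initialState:

```html
<script>
  window.INITIAL_STATE = <%- JSON.stringify(initialState) %>;
</script>
```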

And add a fetchData static method to the SubmissionPage component:
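A sketch of the method; returning the promise is what lets Promise.all wait for it:

```js
class SubmissionPage extends React.Component {
  // called by the server before rendering
  static fetchData(dispatch, params) {
    return dispatch(_fetchSubmission(params.id));
  }

  // ...render etc. stay as before
}
```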

That’s all!

Full code accessible here.

The post image was taken from a really nice Redux example with modern JS best practices.

Erstwhile in the series


In the last post we created a simple application, using just bare React.

Full code of the application is accessible here.

The important thing to notice is that we hold the state of the app in many places. In a more complicated application it can cause a lot of pain :)

In this post we will update our app to use a more structured pattern for managing the state – Flux.

Why Flux?

Using bare ReactJS was easy, but our application is simple. With lots of components, having the state distributed all over them would be really tricky to handle.

Facebook experienced such problems, from which a very well known one was the notification bug.

The bug was that you saw the notification icon indicating that you had unread messages, but when you clicked the button to read them, it turned out that there was actually nothing new.

This bug was very frustrating both for users and for Facebook developers, as it came back every time the developers thought they had already fixed it.

Finally, they realized that it was because it’s really hard to track updates to the application state. They had models holding the state and passing it to the views, where all the interactions happen. Because of this, triggering a change in one model could cause a change in another model, and it was hard to track how far these dependencies reached.

Summing up, this kind of data flow is really hard to debug and maintain, so they decided they need to change the architecture completely.

So they designed Flux.

General idea

First of all, you need to have in mind that Flux is an architecture, an idea. There are many implementations of this idea (including the Facebook one), but remember that it’s all about the concept behind them.

And the concept is to have all the data being modified in stores.

Every interaction that causes change in the application state needs to follow this pattern:

  1. create an action – you can think about it as a message with a payload
  2. dispatch the action to the stores using a dispatcher (important: all stores get the message)
  3. in the view, get the store state and update your local state causing the view to rerender

You can have many stores and there is a mechanism to synchronise modifications done by them if you need it.

I recommend that you read a cartoon guide to Flux, the architecture is explained really well there, and the pictures are so cute! :)

Smart and dumb components

A thing worth emphasising is that some components will require their own state. We will call them “smart components”. Others, responsible only for displaying the data and attaching hooks, we could call “dumb components”.

“Smart components” don’t modify their state by themselves – like I mentioned earlier, every state change is done by dispatching an action. They just update their state by using a store’s public getter.

“Dumb components” get the state items they need passed to them through props.

Let’s fluxify our app

Let’s add new dependencies to our package.json by running: npm install --save flux events.

Dispatcher

As I said, all state changes need to be done by dispatching actions. We need to create src/AppDispatcher.js then:
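A minimal sketch – the whole app shares a single dispatcher instance from Facebook’s flux library:

```js
import { Dispatcher } from 'flux';

export default new Dispatcher();
```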

Action types

It’s good to have all action types defined in one file. Create a src/constants directory with ActionTypes.js inside:
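A sketch with the types used in this post:

```js
export default {
  REQUEST_SUBMISSION: 'REQUEST_SUBMISSION',
  RATING_PERFORMED: 'RATING_PERFORMED',
};
```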

Action creators

Now we will define the SubmissionActionsCreator:
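A sketch, reconstructed from the description below:

```js
import AppDispatcher from './AppDispatcher';
import ActionTypes from './constants/ActionTypes';

export default {
  requestSubmission(id) {
    AppDispatcher.dispatch({
      actionType: ActionTypes.REQUEST_SUBMISSION,
      id,
    });
  },

  performRating(id, rate) {
    AppDispatcher.dispatch({
      actionType: ActionTypes.RATING_PERFORMED,
      id,
      rate,
    });
  },
};
```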

SubmissionActionsCreator uses AppDispatcher to dispatch needed actions.

As you can see, an action is just a simple Javascript object with data that the store will need to calculate the state change.

An important key that will always be present in the action object is actionType – one of the constants listed in the ActionTypes.js file.

Here we also need the submission id and sometimes a rate.

Now we can update our smart SubmissionPage component to use SubmissionActionsCreator instead of just directly accessing the API:

Store

And the last thing we need is to add the store where our state will live:
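A sketch of the store based on the description below (requests for other action types would follow the same pattern):

```js
import { EventEmitter } from 'events';
import AppDispatcher from './AppDispatcher';
import ActionTypes from './constants/ActionTypes';
import Connection from './lib/Connection';

const CHANGE_EVENT = 'change';
let _submission = null;

class SubmissionStore extends EventEmitter {
  getSubmission() {
    return _submission;
  }

  addChangeListener(callback) {
    this.on(CHANGE_EVENT, callback);
  }

  removeChangeListener(callback) {
    this.removeListener(CHANGE_EVENT, callback);
  }

  emitChange() {
    this.emit(CHANGE_EVENT);
  }
}

const store = new SubmissionStore();

// every dispatched action lands here; the store reacts to the ones it cares about
AppDispatcher.register((action) => {
  switch (action.actionType) {
    case ActionTypes.REQUEST_SUBMISSION:
      Connection.get(`/submissions/${action.id}`).then((submission) => {
        _submission = submission;
        store.emitChange();
      });
      break;
    default:
      break;
  }
});

export default store;
```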

  • getSubmission – a public getter that we will use in our smart component to update its local state based on the store state
  • addChangeListener – an interface for subscribing to store state changes
  • removeChangeListener – an interface for unsubscribing from store state changes
  • emitChange – a private store method for notifying subscribers about a store state change

Notice also the AppDispatcher.register part, where we do the actual request to the API, update the store state on success and notify all subscribed components that the state has changed.

Now we can update our smart SubmissionPage component to use SubmissionStore.

The whole SubmissionPage class should look like this:

In componentDidMount we use SubmissionActionsCreator to dispatch requestSubmission.

Because in componentWillMount we subscribe to store changes using addChangeListener, we will be notified when the submission is loaded from the API.

Remember to unsubscribe in componentWillUnmount.

Thanks to the subscription, the onChange method will be called on every store state change. In the onChange method we can then update the local state to the current store state.

Exactly the same mechanism is used in performRating.

That’s all!

We updated our application to use the Flux architecture. It’s definitely an improvement over using bare ReactJS. We have more control over the application state.

But it has some downsides too. If the application grows and there are a lot of stores it’s hard to synchronize changes, especially when the stores depend on each other.

I will write more about this in the next post, where we’ll introduce Redux to our application.

For now, you can practise a bit by fluxifying the rest of the application.

Full code accessible here.

See you next week!

Previously in the adventures series


In the last post we decided to use the following tools:

  1. Server side Javascript rendering – Express as the frontend server
  2. JS written in EcmaScript6 syntax – transpiling ES6 to ES5 using Babel loaded through Webpack
  3. Stylesheets written in Sass – transpiling SASS into CSS using sass-loader for Webpack
  4. All Javascript bundled in one file and all stylesheets bundled in another file – Webpack
  5. To minify assets (js, css) for production – Webpack
  6. A mechanism to watch for changes and transpile on the fly in development mode, to speed up workflow – Webpack
  7. Something to handle external dependencies – npm

Now we’ll learn how to set them up.

Idea

We will be creating a simple application for rating submissions. This is a really simplified version of the application we used for evaluating submissions for a RailsGirls event.

We need a form for creating new submissions:

submission-form

We will display pending, evaluated and rejected submissions in separate but similar listings. All listings will have “first name” and “last name” columns, evaluated submissions will additionally have a “mark” column and rejected will have a “reason” column.

evaluated

The last view that we need is the detailed submission view with the rating.

submistion-details

Dependencies

Firstly, let’s create package.json with the application dependencies:

Take a look at the ‘scripts’ key, it’s where we define the application tasks:

  • babel-node – to be able to write server.js file in ES6
  • start – for starting the server in development mode
  • build – for building production assets
  • production – for starting the server in production mode

To install the specified dependencies run npm install from the console, in the project directory.

Babel 6 also requires a .babelrc file. Let’s create it then:
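A sketch – with Babel 6, the es2015 and react presets cover the setup described here:

```json
{
  "presets": ["es2015", "react"]
}
```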

To start the server execute npm start.

To run in production mode execute npm run build first and then npm run production.

Server

As you can see, we are running the server by executing server.js. We need to create it then:
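A sketch of server.js, reconstructed from the walkthrough below (the port and asset paths are assumptions):

```js
import express from 'express';

const port = 3000;

const app = express(); // this line creates an Express application

// ...which we'll configure: the view engine plus the bundled static assets
app.set('view engine', 'ejs');
app.use('/assets', express.static('public/assets'));
app.get('*', (req, res) =>
  res.render('index', { env: process.env.NODE_ENV || 'development' })
);

// and then we start the actual server
app.listen(port);
```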

Now let’s understand what the different parts of this code do. The express() call creates the Express application, which we then configure (the view engine and the static assets middleware), and finally app.listen starts the actual server.

Index

By default Express looks for the view to render in the views directory, so let’s create our index.ejs there:
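A sketch of views/index.ejs based on the two points below (the env flag is assumed to be passed in from server.js):

```html
<!DOCTYPE html>
<html>
  <head>
    <% if (env === 'development') { %>
      <link rel="stylesheet" href="/assets/bundle.css">
    <% } %>
  </head>
  <body>
    <!-- all of our app will be injected into this div -->
    <div id="app"></div>
    <% if (env === 'development') { %>
      <script src="/assets/bundle.js"></script>
    <% } %>
  </body>
</html>
```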

There are two important things going on here. Firstly, there is the div with the “app” id, where all of our app will be injected.

Secondly, we attach bundle.js (and bundle.css) only in development.

It’s important to do it only for development because in production we’ll have our assets minified with fingerprints (e.g. bundle-9bd396dbffaafe40f751.min.js). We’ll use the Webpack plugin to inject javascript and stylesheet bundles for production.

Webpack

Development config

We included bundle.js, but we don’t have it yet, so let’s configure Webpack. Create a webpack directory and inside add the file development.config.js:
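A sketch of the development config, with the shared values assumed to come from the default.config.js described below:

```js
var ExtractTextPlugin = require('extract-text-webpack-plugin');
var defaultConfig = require('./default.config');

module.exports = {
  // entry – where Webpack starts bundling (one entry for JS, one for Sass)
  entry: defaultConfig.entries,
  // output – where the bundle is saved and how the browser can access it
  output: {
    path: __dirname + '/../public/assets',
    filename: 'bundle.js',
    publicPath: '/assets/',
  },
  // module – the loaders transpiling ES6 and Sass
  module: {
    loaders: defaultConfig.loaders,
  },
  // plugins – ExtractTextPlugin pulls the stylesheets into bundle.css
  plugins: [new ExtractTextPlugin('bundle.css')],
};
```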

  • entry – defines entry points, the places from which Webpack starts bundling your application bundles (see the actual value in the shared config below – two entry points, one for stylesheets, one for javascript)
  • output – defines where the output file will be saved, how it will be named and how you can access it from the browser
  • module – defines loaders (for transpiling ES6, sass, etc.)
  • plugins – defines plugins (e.g. we use ExtractTextPlugin to extract the stylesheets to a separate output file)

Some parts will be shared between development and production, so I extracted them to default.config.js:

As you can see, here we configure:

  • how our bundle will be named,
  • on which port our server will start,
  • where our static assets will be served from (we use it in server.js),
  • entries which are the starting points for bundling,
  • loaders which we want to use:
    • babel-loader for ES6,
    • css-loader for ExtractTextPlugin
    • sass-loader for Sass

Production config

As I mentioned, for production we want assets to be minified and attached in HTML with fingerprints. That’s why we need a separate config:

Entry points

We specified entry points to our application as: src/application.js, css/application.scss – but we don’t have them yet. Let’s add them!

Create application.scss in the css directory:

Also download these two css files and save them in the css directory: main.scss, normalize.css. Then create an application.js file in the src directory:
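A sketch of the entry point; the history import matches the one mentioned later in this series:

```js
import React from 'react';
import { render } from 'react-dom';
import { Router } from 'react-router';
import createBrowserHistory from 'history/lib/createBrowserHistory';
import routes from './routes';

// inject the component tree into the div with the "app" id
render(
  <Router history={createBrowserHistory()} routes={routes} />,
  document.getElementById('app')
);
```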

This file is the entry point for our client-side application. Notice the render method – it’s responsible for injecting your component tree into the specified element. For us, it’s the div with the “app” id.

Routes

In application.js we imported the routes.js file that we don’t have yet.

Let’s create only two routes for now:
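A sketch of them:

```js
import React from 'react';
import { Route } from 'react-router';
import Main from './components/Main';
import SubmissionFormPage from './components/SubmissionFormPage';

export default (
  <Route path="/" component={Main}>
    <Route path="submissions/new" component={SubmissionFormPage} />
  </Route>
);
```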

This means that when we go to /submissions/new, the SubmissionFormPage component will be rendered. But notice that the route is nested in the / route, which is assigned to the Main component.

It’s because we want Main to be some kind of layout component, with the menu, which will be visible all the time.

And all its child routes will be rendered inside the Main component thanks to the this.props.children directive:
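Sketched, Main is little more than the menu plus that directive:

```js
import React from 'react';

export default class Main extends React.Component {
  render() {
    return (
      <div>
        <nav>{/* menu links live here */}</nav>
        {this.props.children} {/* nested routes render here */}
      </div>
    );
  }
}
```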

And in SubmissionFormPage we would have the actual form:

Create the above components in the src/components directory. As you can see, each ReactJS component has a render method which defines the HTML to be rendered. It’s not pure HTML – it’s HTML in JSX syntax, which makes it easy to write HTML in Javascript code.

Connection to API

In the above file, you could also notice that when submitting the form we make a request to the backend API. We will use Axios to do this. Let’s create src/lib/Connection.js:
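A sketch of the wrapper (the API host is an assumption):

```js
import axios from 'axios';

const API_HOST = 'http://localhost:4000';

export default {
  get(path) {
    return axios.get(`${API_HOST}${path}`).then((response) => response.data);
  },

  post(path, data) {
    return axios.post(`${API_HOST}${path}`, data).then((response) => response.data);
  },
};
```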

Displaying submissions

To check if everything works, it would be convenient to be able to see the pending submissions list, so let’s create PendingSubmissionsPage:

As you can see here, in componentDidMount we load submissions from the API and assign them to the local component state. Then we pass them to the SubmissionsList component which is responsible for rendering the table. SubmissionsList:

Backend

To have some kind of backend, you can clone and setup this very simplified backend app. Just follow instructions in the README.

Starting the app!

Now we can finally test if everything works. Run npm start in the console, and go to http://localhost:3000 in your browser.

Rating

Now we can implement the rating feature itself.
Let’s add SubmissionPage:

Again, in componentDidMount we load the particular submission from the API and assign it to the local component state. But the most important part is this:

We pass performRating handler as props to the Rate component:

And again we pass performRating further, to the RateButton component, where we have the actual rate value defined.

Here, finally, we have it bound to the onClick event, because only here do we know the particular value for a rating – this.props.value.

Thanks to that, when a user clicks a rate button, the performRating method defined in SubmissionPage is called and a request to the API is made.

Let’s add a route to the src/routes.js to be able to access the view:

That’s all!

We just created a simple application using bare React.
The important thing to notice is that we hold the state of the app in many places. In a more complicated application, this can cause a lot of pain :)

In the next post, we’ll update our app to use a more structured pattern for managing the state – Flux.

For now, you can practise a bit by adding the missing EvaluatedSubmissionsPage and RejectedSubmissionsPage.

The full code is accessible here.

See you next week!

The goal

ReactJS has become very popular recently; the community is growing fast and more and more sites use it, so it seems like something worth learning. That’s why I decided to explore it.

There are so many resources and so many examples on the internet that it’s difficult to wrap your head around it all, especially when you are new to the modern frontend stack.

There are examples with ES6 syntax, without ES6 syntax, with old react-router syntax, with new react-router syntax, examples with universal apps, with non-universal apps, with Grunt, with Gulp, with Browserify, with Webpack, and so on. I was confused by all of that. It was hard to establish the minimal toolset needed to achieve my goal.

And the goal was: to create a universal application with development and production environments (with minified assets in production).

This post is the first of a series describing my journey while learning modern Javascript tools. It takes the form of a tutorial on how to create a universal app using bare React, then Flux, and lastly Redux.

Why universal? What does it mean? Do I need this?

The easiest way to create a ReactJS app is just to have an index.html file with the ReactJS library included as a regular Javascript file.
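Something like this (a sketch; the script paths are placeholders for downloaded copies of the libraries):

<!-- index.html – the simplest possible setup, no server involved -->
<!DOCTYPE html>
<html>
  <head>
    <script src="react.js"></script>
    <script src="react-dom.js"></script>
  </head>
  <body>
    <div id="app"></div>
    <script>
      // render a trivial component into the "app" div
      ReactDOM.render(
        React.createElement('h1', null, 'Hello from React'),
        document.getElementById('app')
      );
    </script>
  </body>
</html>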

It seems easy, so why have I seen example applications with their own frontend servers? I started wondering why I would even need a server if I can just have a simple HTML file.

And the answer is: sure, you can create a modern dynamic application just by using a simple HTML file, but you need to keep in mind that its content will be rendered on the client side only.

It means that if you view the page source in the browser, or make a curl request to your site, all you will see is your main div where the app is injected, but the div itself will be empty.

If the above doesn’t convince you, then perhaps this will: Google bots won’t see the content of your app if it’s only rendered on the client side. So if you care about SEO, you should definitely go with a universal app – an app which is not only rendered dynamically on the client side but also on the server side.

To achieve this you need a separate server for frontend.

You can see people referring to these kinds of apps as isomorphic. Universal is just a newer, better name.

Modern Javascript tools

My goal was to create a separate frontend app with the following characteristics:

  1. Server side Javascript rendering. So we need a server for this.
  2. JS scripts written in EcmaScript6 syntax. So we need something to transpile ES6 to ES5 (ES6 is not fully supported in browsers yet).
  3. Stylesheets written in Sass. So we need something to transpile Sass into CSS.
  4. All Javascript bundled in one file and all stylesheets bundled in another file. So we need a file bundler of some sort.
  5. Assets (js, css) minified for production.
  6. A mechanism to watch for changes and transpile on the fly in development mode, to speed up work flow.
  7. Something to handle external dependencies.

After looking at many examples on the Internet, my mind looked like this:

JS words cloud

I didn’t know what all of these tools do exactly and which of them I needed. E.g. do I need “browser-side require() the Node.js way” if I already decided to use ES6? Do I need Bower if I already have npm? Do I need Gulp at all?

After lots of reading I finally managed to group the tools:

words cloud grouped

EcmaScript6 (ES6)

ES6 is the new Javascript syntax, standardised in 2015. Although it’s not implemented in all browsers yet, you can already use it. What you need is something to transform it to the currently implemented Javascript standard (ES5). If you are familiar with CoffeeScript, it’s the same process – you write in one syntax and use a tool, e.g. Babel, to translate it to another. This process has a fancy name – transpilation.

As ES6 introduces lots of convenient features which will soon be implemented in browsers, in my opinion there is no need to use CoffeeScript right now. That’s why I chose to use ES6.

Module definitions

One of the many convenient features of ES6 is the ability to define modules in a standard, universal way.

Javascript didn’t have any native mechanism for managing dependencies before. For a long time the workaround was a mix of anonymous functions and the global namespace:
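Something like this (a sketch of the old pattern):

// the old-school approach: an anonymous function writing
// into a single global namespace object
var MyApp = MyApp || {};

(function () {
  MyApp.greeter = {
    greet: function (name) {
      console.log('Hello, ' + name + '!');
    }
  };
})();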

Unfortunately, it didn’t specify dependencies between files. The developer was responsible for establishing the correct order of included files by hand.

As you can suspect, it was very error-prone.

CommonJS

That’s why the CommonJS committee was created, with the goal of establishing a standard for requiring modules.

It was implemented in Node.js. Unfortunately, this standard works synchronously, which in theory means it’s not well adapted to in-browser use, given that the dynamic loading of a Javascript file has to be asynchronous.

AMD

To solve this problem, another standard was proposed – Asynchronous Module Definition (AMD).

It has some disadvantages, though. Loading time depends on latency, so loading many dependencies can take a long time.

The incoming HTTP/2 standard is meant to drastically reduce the overhead and latency of each single request, but until that happens, some people still prefer the CommonJS synchronous approach.

While setting up Babel you can choose which module definition standard you want to have in the transpiled output. The default is CommonJS.

So when you define your module in the new ES6 syntax:
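…take this tiny greeter module as a minimal example:

// greeter.js – a module in ES6 syntax
export default function greet(name) {
  console.log(`Hello, ${name}!`);
}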

It will be translated to the chosen standard.

If you’ve chosen CommonJS the above module would be transpiled to:
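Roughly like this (the exact output varies by Babel version and settings):

'use strict';

Object.defineProperty(exports, '__esModule', { value: true });
exports.default = greet;

function greet(name) {
  console.log('Hello, ' + name + '!');
}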

And for AMD:
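Again roughly:

define(['exports'], function (exports) {
  'use strict';

  Object.defineProperty(exports, '__esModule', { value: true });
  exports.default = greet;

  function greet(name) {
    console.log('Hello, ' + name + '!');
  }
});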

Module loaders

Having standards for defining modules is one thing, but the ability to use them in a Javascript environment is another.

To make it work in the environment of your choice (browser, Node.js etc.) you need to use a module loader. So a module loader is the thing that loads your module definitions in a given environment.

There are many available options you can choose from: RequireJS, Almond (a minimalistic version of RequireJS), Browserify, Webpack, jspm, SystemJS.

You just need to choose one and follow the documentation on how to define your modules.

For example, RequireJS supports the AMD standard, Browserify by default CommonJS, Webpack and jspm support both AMD and CommonJS, and SystemJS supports CommonJS, AMD, System.register and UMD.

Dependencies

Your app usually depends on some libraries. You could just download and include all of them in your files, but it’s not very convenient and quickly gets out of hand in larger projects.

There are a few tools for dependency management. If you use Node.js, you are probably familiar with its package manager – npm.

Another very popular one is Bower.

Since I needed to use Node.js to implement the frontend server, I decided to go with npm.

Shimming

In npm, all libraries are exported in the same format. But, of course, it can happen that the library you want to use is not available via npm, but only via Bower.

In such a case, remember that some of the libraries may be exported in a different format than what you’re using in your application (e.g. as globals).

In order to use those libraries, you need to wrap them in some kind of adapting abstraction. This abstraction is called a shim.

Please check your module loader’s documentation to see how to do shimming.
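For instance, with Webpack a common trick is to inject a global-style library as if it were a module (a sketch; legacyLib and the vendor path are made-up names):

// webpack.config.js (fragment) – shimming with Webpack's ProvidePlugin;
// 'legacyLib' and the vendor path are made-up names
plugins: [
  // make the global-style library importable as a regular variable
  new webpack.ProvidePlugin({
    legacyLib: path.join(__dirname, 'vendor/legacy-lib.js')
  })
]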

Task runners

If you use npm you can define simple tasks in your top-level package.json file.
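For example (a fragment; the script bodies are assumptions matching the setup described here):

"scripts": {
  "start": "node server.js",
  "build": "webpack --config webpack.production.config.js"
}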

It’s convenient as a starting point, but as your app grows it may not be sufficient anymore. If you need to specify many tasks with dependencies between them, I recommend one of the popular task runners such as Gulp or Grunt.

Template engines

Template engines are useful if you need to have dynamically generated HTML. They enable you to use Javascript code in HTML.

If you are familiar with erb you can use ejs. If you prefer haml, you would probably like Jade.
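For instance, a trivial ejs template could look like this (the title and user variables are my assumptions):

<!-- views/index.ejs – Javascript embedded in HTML -->
<h1><%= title %></h1>
<% if (user) { %>
  <p>Hello, <%= user.name %>!</p>
<% } %>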

Server

Last but not least, I need a server. Node.js has a built-in one, but there is also Express.

Is Express better? What is the difference? Well, with Express you can define routing easily:
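For example:

// server.js – routing in Express in a nutshell
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello from Express!');
});

app.listen(3000);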

It looks really good, but I’ve also seen many examples using routing specific to ReactJS – implemented with react-router.

I wanted to use react-router too, as it seems more the ‘ReactJS way’. Fortunately, there is a way to combine react-router with an Express server by using the match method from react-router.
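Something along these lines (a sketch assuming the react-router 2.x server-rendering API; the ‘index’ view stands for the Ejs layout mentioned below):

// server.js (fragment) – Express handing routing over to react-router
import { renderToString } from 'react-dom/server';
import { match, RouterContext } from 'react-router';
import routes from './src/routes';

app.get('*', (req, res) => {
  match({ routes, location: req.url }, (error, redirectLocation, renderProps) => {
    if (error) {
      res.status(500).send(error.message);
    } else if (redirectLocation) {
      res.redirect(302, redirectLocation.pathname + redirectLocation.search);
    } else if (renderProps) {
      // render the matched component tree to an HTML string
      const content = renderToString(<RouterContext {...renderProps} />);
      res.render('index', { content: content });
    } else {
      res.status(404).send('Not found');
    }
  });
});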

Choices

Summing up, here are my choices matched with the characteristics that I defined at the beginning of this post:

  1. Server side Javascript rendering – Express as the frontend server
  2. JS scripts written in EcmaScript6 syntax – transpiling ES6 to ES5 using Babel loaded through Webpack
  3. Stylesheets written in Sass – transpiling Sass into CSS using sass-loader for Webpack
  4. All Javascript bundled in one file and all stylesheets bundled in another file – Webpack
  5. Assets (js, css) minified for production – Webpack
  6. A mechanism to watch for changes and transpile on the fly in development mode, to speed up work flow – Webpack
  7. Something to handle external dependencies – npm

Additionally, I chose Ejs for the layout template, and since I’m using npm and Webpack, we don’t really need to bother with Grunt or Gulp task runners.

But of course, you can choose differently since there are a lot of other combinations:

choices

Now that we know what we want to use, in the next post we will move on to creating the app. See you next week!

Update: Here is the next post.

You might have already read the great review of Web Summit 2015 by Gosia (aka The Cheerful Designer) and be thinking about going next year. Well, maybe it is time to hear my (aka The Awkward Developer) opinion.

TL;DR Don’t go.

I knew Web Summit with 42k (!) attendees, gazillion startups and bazillion things might not be the best place for introverts or socially awkward individuals like myself. And that the event is not meant for programmers, but rather marketing people, entreprenours and startupers. Yes, I don’t even know how to spell “entrepreneurs”, but I still decided to go when my great friends from Amazemeet invited me to join them. It sounded like a one-of-a-kind experience. Plus, I wanted to see the “Toys” on show at Machine Summit.

The Crowd

source: www.v3.co.uk/v3-uk/news/2434300/web-summit-top-10-insights

Well, it quickly turned out the Web Summit crowd wasn’t only a problem for introverts. I’m sure 99.9% of attendees felt overwhelmed at some point (and the remaining 0.1% probably just skipped the event and went straight to the pubs). There were lots of helpful volunteers, team members and even law enforcement doing their absolute best to make the whole thing work as smoothly as possible. But with 9 main stages, a 10-minute walk between the 2 main buildings, a 15-minute walk to the Food Summit, no breaks between the 15-20 minute talks, queues and plenty of other attractions – it was just not possible.

But I don’t give up easily. I decided to come up with (survival) strategies to best spend my time and enjoy the experience as much as possible. Since there were 21 different “summits” happening during the whole 3-day event, there was a lot to choose from.

FashionSummit

source: liveblog.irishtimes.com

  1. Out of my comfort zone: Fashion Summit

My first idea was to go and listen to something I have no clue about and fashion was an obvious choice for me. I was hoping to see other points of view and hear some new stories. But after waiting for 35 minutes (and hardly moving) in the best-dressed queue ever I decided to crawl back to my little world (Code Summit). Still, I’m planning to use this idea during my next conference. And maybe wear nicer shoes.

  2. Big names: Centre Stage

The second idea came from the obvious fact that big conferences have some big speakers from big companies. So, like many others, I decided to watch talks on the Centre Stage, where the most famous speakers were invited. Sure, some of the talks (here and everywhere else) sounded just like shameless self-promotion, but generally the products & companies were interesting or the speakers were entertaining enough to keep me listening and enjoying it. And some were really good – just like Gosia, I absolutely loved Creativity by Pixar’s Ed Catmull, Mike Schroepfer talking about Facebook’s bold plans to bring the Internet to remote communities, and many others. On the Centre Stage, I also saw the worst presentation I’ve ever seen – “The CyberPsychology of CyberCrime”, which turned out to be… a cringe-worthy promo for the “CSI: Cyber” TV show, with (surprise!) the overuse of the word “cyber”, no real information and a mandatory Freudian-penetration-penis joke that all the 9-year-olds in the audience found hilarious.

  3. The comfort zone: Code Summit

Code Summit might sound like a perfect safe haven for a developer like me, but I was afraid the talks would be too general or too basic for a rather experienced programmer. My plan was to go there only to listen about security, but I ended up attending a lot of presentations when the Fashion Summit idea didn’t work out. And yes, the talks were a little bit too general, with hardly any code, but the really passionate speakers made it worth the time (Jeff Pulver – Remember to breathe, Bryan Liles – Application ops ladder, Gautam Rege – Gopher it). And then the talks about security started and I was absolutely blown away by Nico Sell, Mikko Hypponen, Eugeny Chereshnev – just to name a few.

Jibo

source: www.jibo.com

  4. The Heaven: Machine Summit

This was my primary reason to attend The Big Conference. The chance to see and maybe even play with Pepper the humanoid robot, Jibo the social robot, mini-drones, the latest wearables, try the Audi Oculus Rift Experience, all the “fit-bits for cats” gadgets and so on. I love it all. Discussions about the Not so uncanny valley or Robot ethics (which turned out to be about having sex with robots), Cynthia Breazeal’s talk about the Rise of the social robots and the keynote from Pebble were my absolute favorites. Sure, I was tempted to start counting the number of speakers telling the audience that what their company is doing is almost as exciting as Tesla :), but yeah – that was true most of the time.

 

Also, the WiFi and coffee were good. And Dublin has the best pubs and live music. But unless you enjoy shopping on Black Friday and barging your way through the crowd, or you love networking so much that even overdosing on it still sounds like real fun…

Don’t go.

Web Summit Dublin 2015

Yes. Basia and I were at the Web Summit conference this year. We went to Dublin – the land of fairies, Guinness and a huge technology conference. Overall it was great and exhausting; the talks were interesting and inspiring, I discovered how varied the startup scene is, and I could confirm that, indeed, Guinness tastes better in Ireland.

I got my ticket by entering a competition organised by Amazemeet (blog.amazemeet.com/women-in-tech/). Thanks to that lovely initiative I got to meet two wonderful people: Mike – founder of Amazemeet, a person with magic powers for finding out the best stories from people he’s just met; and Nádia – also a winner, a UX designer, passionate sketchnoter and great small talker.

Web Summit Friendship

 

When there are 2 parallel tracks at a conference, we often find ourselves making hard choices. We try to assess which topic will be the most interesting or which presentation we will gain more from. At the Web Summit there were 7 parallel tracks, and at least half of them sounded amazing. There were also dozens of interesting startups, new technologies like Oculus, and on top of that plenty of fun people to talk to in the long coffee queues.

I like drawing and I like sharing my experiences, so I chose the talks that were most interesting to me and compiled some of my sketchnotes from them.

Enjoy!

 

Web Summit Day 1

Day 1

Big Mistake – Andrei Herasimchuk

Should designers learn how to code? The answer given was Yes! While I’m sure we could have a debate about that, I liked the argument that coding gives you the ability to create something and that it’s a superpower.

Chairman of the bored – Chris Moody

A reminder that creativity means being brave; it means pushing the boundaries and innovating. Chris walked us through some popular “safe words” and proposed alternative terms that can activate a more creative approach to problem-solving.

Venture design: from zero to launch – Ethan Imboden

A talk about delivering a product in lean iterations: how to use venture design to find a rapid path from idea to market.

Why design is the new engineering – Neil Rimer, David Okuniev, David Tuite, Jeffrey Veen, Mike Davidson

How design will shape the tech startup ecosystem for years to come. This panel discussion was just full of great quotes:

What a product does is deeply affected by the design.

The design of the product should be its foundation.

Design comes from the team – co-designers of the UX.

Teach empathy at every end point.

Flat, fast and f*cked up – Marcus Woxneryd

My favourite talk of that day, from the creators of Monument Valley: a presentation about organisational culture put into 3 points:

1. Flat is Phat (organisation); we know that very well at Lunar, but it’s always nice to hear it from someone else, with a unique approach and ideas.

2. Fast (team); the importance of collaboration skills in teams – for good performance we need purpose and trust in others. Marcus also pointed out the value of a celebration ritual: having time to appreciate good work and achieved successes (small and big) is something that helps to build good teams.

3. F*cked up (individual); everyone is different, we all bring something unique and valuable to the table. Embrace that in others. Give others feedback – not only “high-five feedback” but also critical feedback.

At the end we were left with the AFGO motto: when you find yourself in a difficult situation, when you feel that you failed at something, think about it as an AFGO – Another F*cking Growth Opportunity. Because before you succeed you need to fail sometimes, experiment and learn from that. Exploration over strategy.

 

Web Summit Day 2

Day 2

Diplomacy in the digital age – Anne-Marie Tomchak, Jan Melissen, Jane Suiter, Patrick Smyth

A panel about how real-time citizen reporting, data leaks, memes and hashtags have influenced the political landscape. I went because I didn’t know anything about this topic. It was interesting to hear journalists and diplomats share their experiences of how digitalisation is shaping their work. Social media gave diplomats an opportunity to engage in conversation with people and to share information directly, but also created a space where politicians are challenged if they don’t deliver what they promised.

The art of tidiness – Marie Kondo

The author of the bestseller The Life-Changing Magic of Tidying Up talked us through her method of tidying up. At the end of her presentation (given in Japanese) it became clear why she was part of a tech conference: Marie will launch her app next year, in which you can document your progress in cleaning up, but also become a cleaning consultant. I guess it will mark the moment in the tech industry when there is an app for everything.

The ultimate selfie – Jacklyn Ford Morie

A talk about what we leave behind in a digital world and how much of our lives we can already capture with quantified-self technology. Jacklyn presented a vision of avatars that we can leave behind us – avatars that have been learning about us throughout our whole lives and can continue to represent us after we die. For me, it’s a sci-fi idea that is still far in the future, but the talk showed me that a lot is actually being done in that area of virtual experiences.

Storytelling #emotification – Mary Lee Copeland

Storytelling is re-emerging in tech as a hot trend and buzzword. It was a great presentation based on a video of James Brown and his storytelling through singing and stage acting. Mary claims that all human beings are storytellers and that stories resonate with us. We should use stories to create branding and user experiences that are engaging and memorable. Tell a story to people, find them when they’re in trouble, and make your product a turning point in that story.

 

 

Web Summit Day 3

Day 3

The sixth sense – David Eagleman

A presentation about how narrow our experience of reality is. Research in neuroscience and brain processes led David to create new interfaces, such as a sensory vest.

I highly recommend watching his TED talk on this topic: Can we create new senses for humans?

Dan Brown in conversation with Peter Kafka

I’m not really sure why Dan Brown was there, and I guess he didn’t know either. Nevertheless, it was interesting to listen to him talk about his relationship with science and religion and how it was influenced by his childhood and school experiences – how those two fields are, in his opinion, using two different languages to tell the same story. As a fun fact, he shared the title of his first novel (created when he was 5 years old), called Giraffe, the Pig, and the Pants on Fire, which definitely sounds like something I would read.

The magazine reimagined – Jefferson Hack, Matt Garrahan, Liam Casey

The story of creating a unique magazine. “Its blend of high fashion and world-class photography with features on the arts, politics and literature continues to make each beautifully crafted edition a collectors’ item.”

To me, it looks like something taken out from Harry Potter’s world. You can see it here: AnOther Magazine presents: ‘A View of the Future’

Creativity – Ed Catmull, Caroline Daniel

Ed Catmull, the President of Pixar, was in conversation with Caroline Daniel about creativity. It was the closing talk and I had been waiting for it the whole day. It was great to listen to him talk about friendship and how failure fuels creativity. “Every film Pixar works on always sucks at first”, so it’s important to create an environment that allows constructive criticism as well as the evolution of new ideas, talents and solutions. Ed also spoke about the childishness needed to stay creative, and how having fun is a way to stay passionate.

There are a number of occasions when I have to describe us as a company. A software development shop. A mobile and web development agency. A web development boutique. A product development services organization. I have used them all, in every combination, and more.

In fact, I struggle a bit when it comes to defining Lunar Logic’s identity through what we do for our clients. One way of looking at it is that we have software developers, graphic designers, testers, and product owners, thus we help to turn ideas into software products. What our clients often stress, though, is that we shine most when we get involved throughout the whole value stream of product building, not only taking care of the software development.

Lunar Team

However, if you asked me what is our ultimate goal when working with our clients, I wouldn’t be talking about software development, UX or high quality. We do take care of all those, but they are just tools we use to reach our goal. That goal is to make our clients happy.

It just so happens that sometimes we make our clients happy by building software. That’s not always true, though.

I’m known to frequently advise our clients to send less work our way than they initially planned. I encourage them to cut down the feature list. I propose simplifying initial solutions as much as reasonably possible. In short, I work hard for us to have less work than we otherwise could.

Why? It’s because we don’t measure our success by how much software we built. We measure our success by how happy our clients were once our part was done. Strategies to optimize for that mean adopting the ideas of Lean Startup, especially when it comes to rapid experimentation and relentlessly validating business hypotheses.

There’s more to it, though. When we start working with a new client there’s a lot of uncertainty about what the collaboration will look like. That’s why I often recommend a scope that’s even smaller than a Minimum Viable Product. Just a couple of weeks of working together can tell us a lot about how good and how effective our collaboration is. We call that idea the Minimal Indispensable Feature Set.

Then we can decide together whether we are on a good path towards our goal: making our client happy.

In theory, it may mean that we’ll end up doing just a couple of weeks’ work instead of a much longer gig. That’s perfect. We don’t want to be busy all the time. That’s not our goal. Remember? We optimize our work toward clients’ happiness.

One could say that it doesn’t sound like the best strategy for an agency that basically makes money by selling its engineers’ time. Interestingly enough, the opposite seems to be true in our case.

As long as we succeed at keeping our clients happy we get more and more work from our existing and past clients. Referrals are a huge source of our new projects. Every now and then we need to reject new projects because we are fully booked.

The thing is that it’s hard to easily define who we are as a company. A software development shop that discourages their clients from building software. Well, that doesn’t sound usual.

Lunar Team

I think of us more as of a happiness delivery company. We deliver happiness. Normally by building software products. Sometimes by doing pretty much the opposite.

Whichever frame you want to use to describe what we do, either a web agency or software development professional services company, we are not your usual type. And we are vocal about that. The reason is some clients will love this kind of approach. Others will look for something different. Obviously we look for the former and we hope that you are one.

When you look at the list of programming conferences, there is usually a clear distinction between the front-end and back-end oriented ones – but that’s not the case with Full Stack Fest. Organised by Codegram, it merges 2 conferences – Barcelona Ruby Conference and FutureJS – with a day of workshops and a hackathon in between. This special event was the reason why Maciek and I visited beautiful Barcelona at the beginning of this month. For me, Full Stack Fest was the longest programming conference in my career, and I was really curious how it would be – conference time is usually really intense, and 5 days seemed like a lot of time.

Talks were very diverse and the schedule was well thought out – there was a mixture of strongly technical talks with lots of code examples, and less technical talks focusing on soft skills, managing the development process or other areas that could be inspiring for software developers. The conference schedule was divided into blocks of 2 talks. Between every block there was a coffee break, breakfast or lunch – great opportunities to meet other participants. Apart from delicious food, there was plenty of coffee and drinks. Between the talks, Liz Abinante did a great job as master of ceremonies, providing necessary information and preparing the audience for the talks. I really liked the sequencing of talks – I think it helped to maintain a better level of overall focus by providing a variety of stimuli. The beauty and relaxed atmosphere of Barcelona helped us rest in the evenings so that the next day we could wake up ready to dive into the next portion of code.

Baruco

The very first day of the conference was the most intense, but also the most interesting for me, so I decided to write about it in detail. It started, as usual, with a Ruby celebrity – Yukihiro Matsumoto, this time revealing ideas on Ruby 3.0. He shared a lot of inspiring thoughts – he is a language designer, and he encouraged all the programmers out there to be language designers as well (after all, in the process of writing code we encounter design issues). He also talked about improving thread safety and the need for a more abstract concurrency model, and presented a stream model – you can take a look at the prototype here.


Barcelona. Photo by Maciek

After a short break, we had Bryan Liles from Digital Ocean advising on the choice of development strategy. While comparing deployment environments, he used a metaphor of pets vs. cattle. A pet requires help when it gets ill, and there’s only one instance of it, while cattle have many identical instances. If something goes wrong, they can easily be replaced. How to recognise the type of your current strategy? Well, if you need to ssh to your server to see logs, then you probably have a pet, not cattle. He also presented a couple of ways to make logs more accessible – he mentioned https://getsentry.com/welcome/ and http://prometheus.io/.

Then we had the pleasure of listening to Eileen Uchitelle, a Rails contributor, who shared with us her experience of trying to speed up Rails integration tests. She bravely took on the topic of measuring tests’ performance, which can be a difficult task – there are a number of benchmarking and profiling tools to choose from, but their output is usually difficult to read and compare. However, it’s worth trying – thanks to benchmarking we can investigate the most time-consuming parts of the code and significantly improve overall performance. She showed us how to use these tools and recommended doing it in every project.

After lunch, there were two interesting talks focusing on protocols and related issues. The first one was by Aaron Patterson, and apart from providing us with a serious dose of humor and photos of his gorgeous cats, he told us about the Rails request and response lifecycle, what is wrong with it, and how HTTP/2 could make it better. It introduces a lot of interesting concepts – for example, instead of the 4-8 connections in HTTP/1, there’s only one – this way, we can track what has been pushed. And since we usually have plenty of assets in our apps and want a fast development environment, this sounds promising. One concerning issue is compatibility, as HTTP/2 is pretty revolutionary. The good news is that they’re planning to focus on backwards compatibility.

The other talk, presented by Aaron Quint, focused on improving how the app communicates internally and externally. The default choice for many developers is JSON, and in the beginning it works very well indeed. It always starts small, but with time it becomes more and more complex and difficult to handle. He showed us a neat solution developed by Google – Protocol Buffers, described as a language- and platform-neutral extensible mechanism for serializing structured data. How does it work? You need to define the schema in a proto format – it has explicit types and it deals with repeated fields, nested types and optional/required fields. It handles removal/addition of fields easily, but the output is not human-readable. He also mentioned gRPC – a mobile- and HTTP/2-first framework developed by Google using those buffers – and TCPEZ – a protocol and client-server implementation for a request/response TCP server, where the client is the load balancer. The last solution has been in prod for 2 years already! The moral of the talk can be summed up in one sentence: don’t accept the community defaults without checking out the alternatives.

Another break, a bit of networking, and finally the long-awaited talk by Sandi Metz. She started with a little story about the past, when she published the book that finally made her quit her job in order to teach people. Her topic was the null object pattern – the active nothing. She described her approach to software development in four statements: infected by Smalltalk (it was her day job for 12 years!), condition-averse, message-centric and abstraction-seeking, then explained all of them in detail. With more and more slides, she revealed why using inheritance is not always a good idea and that it’s made for specialisation, not for sharing code. She suggested making classes more alike, isolating the thing that varies, and thus showed how composition and dependency injection are the correct abstractions in this particular case.

The first day was closed by Rin Raeuber’s talk on artificial intelligence and neural networks. She presented some basic ideas behind these terms, starting with biology and then jumping to the computer science perspective. She showed simple calculations and how learning with back propagation works – how we choose training data, adjust weights etc. We also got to know what neural networks are good for – pattern recognition, filtering out noise, signal processing.

The second day of Baruco was very interesting as well. It started with a talk by Yehuda Katz showing how a naive Rust implementation of a Ruby gem beats the performance of a hand-tuned C version. Then we had Nell Shamrell talking about responsible refactoring – how to distinguish necessary changes from cosmetic ones, and how to evaluate the risk of starting a refactoring session. After breakfast, we had an inspiring talk delivered by Ernie Miller about the history of building skyscrapers as a metaphor for software development.

Another talk, presented by Piotr Solnica, was about blending object-oriented and functional approaches. He started by describing his past – like many people from the Ruby community, he started with PHP, found Ruby on Rails, and after the initial enthusiasm realised it doesn’t work that well when the codebase grows. He told us about functional objects – immutable, having a ‘call’ method and no side effects. In his approach, immutability is a virtue – that’s why he advised getting rid of mutable objects. It is a mind-shifting change: the objects must be ready, as they can’t be changed later. Functional interfaces built to be consistent and side-effect-free are a great foundation for higher-level OO abstractions.

After lunch, there was a talk with a live programming session introducing lambdas in Ruby by Corey Haines. Then we heard a talk by John Cinnamond, inspired by Sandi Metz, about extreme object-oriented Ruby, and learned why we shouldn’t try to create pure OO languages. The last talk of Baruco was presented by Laureen Scott and it mixed programming with poetry. I had no idea they have so much in common! For example, there are specific constraints in both; white space or a semicolon matters in both, and it can change everything. Also, “say a lot with a little” applies to both fields.

Workshop

Both Maciek and I decided to attend the React.js workshop held by Erik Wendel and Kim Joar Bekkelund. It was a well-prepared, interesting workshop focusing on understanding the basic concepts of React and how it really works. At first we got familiar with the theory behind React (they made a cool presentation explaining it all!) and then jumped into practical challenges. Starting with these small exercises made us better prepared for the actual task – creating a real-time monitoring app for Twitter. We didn’t have to bother with setting up the API access for Twitter or Google Maps, so we could focus on getting to know React. Erik and Kim were really helpful and gave thorough, satisfying explanations. If you want to start your own adventure with React.js, the instructions are public. Go and try it!

FutureJS

FutureJS was even more diverse than Baruco. It started with a great, informative talk by Rachel Andrew about the business of front-end development. She began by going back to the beginning of her work and describing how web development has changed over time. She expressed a fear that due to our reliance on frameworks, we will stop pushing for better solutions. She stated that people should still value the old-school fundamentals of web development – being able to sit down with a server and build a simple website. Let’s not become single-tool experts! Rachel also presented her ideas on how a web product should be built – it should start with the core experience, building progressively enhanced websites.

Functional programming was present in this part of the conference as well. First, Massimiliano Mantione gave an introduction to transducers – functions that take a transformation and return another transformation. On the second day, Stefanie Schirmer gave a talk on basic functional programming concepts, using cooking metaphors.

Ben Foxall during his talk. Photo by Maciek.

There was one presentation which clearly stood out. Ben Foxall elaborated on the Internet of Things – he said that web browsers can do much more than just present content. He showed a really impressive demo – an app which investigated various parameters of the devices connected to a given URL (battery level, location, device orientation, touch, light and others). Then, after determining the position of every device, he played the sound of singing birds on each one – suddenly, the conference room filled with the sounds of the jungle!

Another very impressive talk was delivered by Steven Wittens. I think he presented the best maths visualisations I’ve ever seen – he focused on showing various image transformations (sampling, bilinear filtering etc.) and different pixel representations. All of it was presented on interactive, moving models, which drew a lot of applause.

Mikeal Rogers from the Node.js Foundation spoke about developing Node.js and the problems they encountered while working on it. There is a huge ecosystem, but in 2014 they found themselves in a crisis, as there was no community focus on developing Node’s core and no collaboration on standards. He described their need to work out a better process, one that should be participatory, efficient and transparent.

I must say that I’m a big fan of Baruco. Last year’s conference was awesome (see Tomek’s recap), and it was the same this year with Full Stack Fest. Again, hats off to the organizers – I had the impression that they had a backup plan for every possible thing that could go wrong. They reacted quickly when one of the talks violated the code of conduct, and did the same when live captioning problems arose – they found an awesome replacement in no time (see this post). Thank you for a great conference, and see you next year!

 


A while ago I found a great presentation on code refactoring called “All the little things” by Sandi Metz. The presentation was based on an exercise called The Gilded Rose Kata. It inspired me to play with the kata, and here are some afterthoughts. For those of you who like to get your hands dirty, I’ve also included a few code examples to help you get started with your own kata exercise.

What is The Gilded Rose Kata?

Let me first start with an explanation of what a code kata actually is. It’s an exercise which helps programmers improve their skills through practice and repetition.

The Gilded Rose Kata is all about two classes, Item and GildedRose, that you should refactor. Item has name, sell_in and quality attributes. The GildedRose class has an update_quality method responsible for decreasing sell_in and updating the quality attribute of each item.
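For reference, the two classes look roughly like this (the update_quality body is elided – that’s the mess we’re after):

class Item
  attr_accessor :name, :sell_in, :quality

  def initialize(name, sell_in, quality)
    @name = name
    @sell_in = sell_in
    @quality = quality
  end
end

class GildedRose
  def initialize(items)
    @items = items
  end

  def update_quality
    # ...a long tangle of nested ifs lives here...
  end
end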

The code is messy and has a lot of if statements that need to be untangled. The rules… hmm, they are pretty clear. Let’s get more familiar with them before we jump in any further.

The Gilded Rose Refactoring Kata

Here is the full description of The Gilded Rose Kata I found in Bobby Johnson’s repository:

Hi and welcome to team Gilded Rose. As you know, we are a small inn with a prime location in a prominent city run by a friendly innkeeper named Allison. We also buy and sell only the finest goods. Unfortunately, our goods are constantly degrading in quality as they approach their sell by date. We have a system in place that updates our inventory for us. It was developed by a no-nonsense type named Leeroy, who has moved on to new adventures. Your task is to add the new feature to our system so that we can begin selling a new category of items. First an introduction to our system:

  • All items have a SellIn value which denotes the number of days we have to sell the item
  • All items have a Quality value which denotes how valuable the item is
  • At the end of each day our system lowers both values for every item

Pretty simple, right? Well this is where it gets interesting:

  • Once the sell by date has passed, Quality degrades twice as fast
  • The Quality of an item is never negative
  • “Aged Brie” actually increases in Quality the older it gets
  • The Quality of an item is never more than 50
  • “Sulfuras”, being a legendary item, never has to be sold or decreases in Quality
  • “Backstage passes”, like aged brie, increases in Quality as it’s SellIn value approaches; Quality increases by 2 when there are 10 days or less and by 3 when there are 5 days or less but Quality drops to 0 after the concert

We have recently signed a supplier of conjured items. This requires an update to our system:

  • “Conjured” items degrade in Quality twice as fast as normal items

Feel free to make any changes to the UpdateQuality method and add any new code as long as everything still works correctly. However, do not alter the Item class or Items property as those belong to the goblin in the corner who will insta-rage and one-shot you as he doesn’t believe in shared code ownership (you can make the UpdateQuality method and Items property static if you like, we’ll cover for you).

Just for clarification, an item can never have its Quality increase above 50, however “Sulfuras” is a legendary item and as such its Quality is 80 and it never alters.

Let’s play with The Gilded Rose Kata

I was looking for an example in Ruby and I found one in Emily Bache’s repository. Here is the code we need to refactor.

The first thing I had to do before rewriting the above code was to prepare a test suite to ensure my changes wouldn’t break the item rules. I simply added rspec and wrote the tests. There are plenty of them; if you want, you can check out the specs here.
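To give you the flavour, a couple of them could look like this (a sketch, not the full suite; the require path is an assumption):

# spec/gilded_rose_spec.rb (excerpt)
require_relative '../gilded_rose'

describe GildedRose do
  describe '#update_quality' do
    it 'degrades quality twice as fast once the sell by date has passed' do
      item = Item.new('Elixir of the Mongoose', 0, 10)
      GildedRose.new([item]).update_quality
      expect(item.quality).to eq 8
    end

    it 'never makes quality negative' do
      item = Item.new('Elixir of the Mongoose', 5, 0)
      GildedRose.new([item]).update_quality
      expect(item.quality).to eq 0
    end
  end
end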

I was pretty sure every rule was covered in the test suite, so I made my first attempt at refactoring the code. After I had done some work improving the code and was still facing a green test suite, a thought came to my mind.

Created by ArturT - https://github.com/ArturT/GildedRose-Refactoring-Kata

 

They call it Golden Master

We used to run dojo workshops at Lunar, and we used a clever technique called Golden Master Testing to record the behaviour of a program. We recorded a bunch of input examples and output results from the program we wanted to refactor. The recorded data was used to check if the refactored code behaves in the same way. It’s great when you have to deal with legacy code and you don’t have a test suite – at least if you can prepare seed input for the program and collect the outputs. I wrote a script, texttest_fixture.rb, that creates all kinds of items and runs the update_quality method for a given number of days. Below you will find the output for 2 days.

$ ruby texttest_fixture.rb 2
OMGHAI!
-------- day 0 --------
name, sellIn, quality
+5 Dexterity Vest, 10, 20
Aged Brie, 2, 0
Elixir of the Mongoose, 5, 7
Sulfuras, Hand of Ragnaros, 0, 80
Sulfuras, Hand of Ragnaros, -1, 80
Backstage passes to a TAFKAL80ETC concert, 15, 20
Backstage passes to a TAFKAL80ETC concert, 10, 49
Backstage passes to a TAFKAL80ETC concert, 5, 49
-------- day 1 --------
name, sellIn, quality
+5 Dexterity Vest, 9, 19
Aged Brie, 1, 1
Elixir of the Mongoose, 4, 6
Sulfuras, Hand of Ragnaros, 0, 80
Sulfuras, Hand of Ragnaros, -1, 80
Backstage passes to a TAFKAL80ETC concert, 14, 21
Backstage passes to a TAFKAL80ETC concert, 9, 50
Backstage passes to a TAFKAL80ETC concert, 4, 50

Of course, in our case a more reasonable number of days would be higher, so we can cover more possible cases. I wrote a golden_master_spec.rb file that executes the texttest_fixture.rb file for 100 days and generates nice, readable it examples like below:

$ rspec spec/golden_master_spec.rb
Golden Master for GildedRose
match line 0: OMGHAI! should equal OMGHAI!
match line 1: -------- day 0 -------- should equal -------- day 0 --------
match line 2: name, sellIn, quality should equal name, sellIn, quality
match line 3: +5 Dexterity Vest, 10, 20 should equal +5 Dexterity Vest, 10, 20
match line 4: Aged Brie, 2, 0 should equal Aged Brie, 2, 0
match line 5: Elixir of the Mongoose, 5, 7 should equal Elixir of the Mongoose, 5, 7
match line 6: Sulfuras, Hand of Ragnaros, 0, 80 should equal Sulfuras, Hand of Ragnaros, 0, 80
match line 7: Sulfuras, Hand of Ragnaros, -1, 80 should equal Sulfuras, Hand of Ragnaros, -1, 80
match line 8: Backstage passes to a TAFKAL80ETC concert, 15, 20 should equal Backstage passes to a TAFKAL80ETC concert, 15, 20
match line 9: Backstage passes to a TAFKAL80ETC concert, 10, 49 should equal Backstage passes to a TAFKAL80ETC concert, 10, 49
match line 10: Backstage passes to a TAFKAL80ETC concert, 5, 49 should equal Backstage passes to a TAFKAL80ETC concert, 5, 49
match line 11: should equal
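
If you are curious what the spec itself can look like, here is a minimal sketch of the idea (not necessarily the exact file from the repo). It shells out to the fixture script and compares its output line by line with a previously recorded file:

# spec/golden_master_spec.rb – a minimal sketch; assumes the golden master
# output was recorded earlier to golden_master.txt (hypothetical file name)
require 'open3'

describe 'Golden Master for GildedRose' do
  golden_master = File.readlines('golden_master.txt')
  current_output, _status = Open3.capture2('ruby texttest_fixture.rb 100')

  current_output.lines.each_with_index do |line, index|
    it "match line #{index}: #{line.strip} should equal #{golden_master[index].to_s.strip}" do
      expect(line).to eq golden_master[index]
    end
  end
end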

Golden Master to the rescue!

It turned out the golden master tests were failing on my refactored code. That meant I had made a mistake somewhere. My previously written specs were green, but it seemed I hadn’t covered everything. What was it? I checked the lines where the golden master tests were failing and realized I had missed one case: an item with quality 49 can’t reach a quality greater than 50, but it should still be able to reach the maximum quality of 50.

The rule for “Backstage passes” item says:

“Backstage passes”, like aged brie, increases in Quality as its SellIn value approaches; Quality increases by 2 when there are 10 days or less and by 3 when there are 5 days or less but Quality drops to 0 after the concert

I added the missing tests to my rspec test suite and fixed the refactored code to make it pass.
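
For illustration, the missing edge case could be covered with something along these lines (a sketch, not the exact spec from the repo):

# Quality 49 should still be able to reach the cap of 50, but never exceed
# it – e.g. a backstage pass close to the concert gains 2 points at once.
describe GildedRose do
  it 'caps quality at 50 even when the increase would overshoot' do
    item = Item.new('Backstage passes to a TAFKAL80ETC concert', 10, 49)
    GildedRose.new([item]).update_quality
    expect(item.quality).to eq 50
  end
end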

What I’ve learned

Don’t trust myself too much. Don’t trust the tests I wrote. Always look for a way to prove myself wrong. The Golden Master Testing technique helped me with that.

What else have I learned? There are plenty of little things pointed out by Sandi Metz that helped me refactor the code step by step.

  • Make smaller things – when you see that many if statements, you know it’s not good to leave them like that. They’re hard to read and hard to understand.
  • Duplication is far cheaper than the wrong abstraction – don’t be afraid to duplicate code. You are learning how to refactor the code, and the abstraction won’t be clear until you understand exactly what your program does. Just don’t get stuck with the wrong abstraction.
  • Keep SOLID principles in mind – we would like to have an easy way to add a new Item with different rules. It would be great to have the code open for extension in that case. And even better to have the code closed for modification at the same time so there won’t be the need to change existing code when adding a new item.
  • Things always get worse before they get better – intermediate steps during refactoring may look like they make things more complicated until you reach the point when you can get rid of the complexity.

I made a second attempt at refactoring the code and extracted a few smaller classes. I made the tests pass and had a lot of fun with that.
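
To give you a taste of the direction without spoiling the actual solution from the repo, the idea boils down to one small updater per kind of item, along these lines (a rough sketch, not my actual code):

# One small updater class per kind of item, picked by item name.
class NormalItemUpdater
  def update(item)
    item.sell_in -= 1
    degradation = item.sell_in < 0 ? 2 : 1
    item.quality = [item.quality - degradation, 0].max
  end
end

class AgedBrieUpdater
  def update(item)
    item.sell_in -= 1
    increase = item.sell_in < 0 ? 2 : 1
    item.quality = [item.quality + increase, 50].min
  end
end

class GildedRose
  # adding a new kind of item means adding a class and a registry entry,
  # without touching the existing updaters (open/closed principle)
  UPDATERS = Hash.new(NormalItemUpdater.new).merge(
    'Aged Brie' => AgedBrieUpdater.new
    # ... backstage passes, Sulfuras and conjured items go here
  )

  def initialize(items)
    @items = items
  end

  def update_quality
    @items.each { |item| UPDATERS[item.name].update(item) }
  end
end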

Now it’s your turn

I prepared The Gilded Rose Kata repository with a ready-to-go test suite. If you want to tackle the exercise, you can clone it and switch to the “ready-to-start-exercise” branch.

$ git clone git@github.com:ArturT/GildedRose-Refactoring-Kata.git
$ cd GildedRose-Refactoring-Kata
$ git checkout ready-to-start-exercise
$ cd ruby
$ bundle install
# run tests prepared by me
$ rspec spec/gilded_rose_spec.rb
# run golden master tests
$ rspec spec/golden_master_spec.rb

This way you can run tests to ensure changes you make in the gilded_rose.rb file won’t break the test suite.

In the repository you will also find my first and second refactoring attempts. Don’t open gilded_rose_refactored_1.rb and gilded_rose_refactored_2.rb unless you like spoilers!

All the little things

If you have already played with the kata, watch Sandi Metz’s video and check out how she did the refactoring. Hope you like it!

Oh, and by the way, if you are interested in what else I’m doing on a daily basis, you can check out the knapsack gem and read more about it in my recent blog post.


Have you ever dreamt about a workplace where your personality really matters? Where your sensitivity and honesty are treated as primary values? Where being open and inquisitive is an important skill?

InternshipQALunarLogic

Are you an awesome team player who loves spending time working with other people? Are you interested in User Experience and able to fit into many different roles?

Join us!

What we offer:

  • Support on your learning path
  • An unusual work environment with: kudos, badges, board games, etc.
  • A lot of fun
  • A paid internship

What we expect:

  • Passion for learning
  • Creativity
  • Critical thinking
  • Communication skills
  • Very good English

 

Apply for a QA internship »

*Internships are planned for 3 months and are based in Krakow.

I guess the most interesting bit of our open salaries story is exactly how the change looked. I already mentioned that this was an important part of the preparation process, so it definitely wasn’t a gung-ho kind of thing.

First of all, open salaries were an opt-in program. No one was forced to join. Not joining would mean that others wouldn’t know that person’s salary and that person wouldn’t have access to the salary list.

The explanation is fairly simple. Everyone who joined Lunar had signed up to a company with non-transparent salaries. This might have been an important part of the deal for them. I didn’t want to force that change on anyone.

For anyone who joins the company after making salaries transparent, joining the open salaries program would be automatic.

One part is transparency; the other is having influence. The latter was much more difficult to design. Before the change pretty much no one had any experience deciding on salaries, so with no control raises could go through the roof. Well, theoretically at least.

Another, more important thing was that I wasn’t ready to give up full control. At the same time, giving up control eventually was the ultimate goal of the process.

We ended up designing three stages. The first one would be launched along with making salaries transparent. For the other two the trigger would be whenever we feel like we are ready.

During the first stage anyone who is in the open salaries program can propose a raise for anybody, including themselves of course. This should be a concrete proposal: what kind of raise we are talking about exactly and why they think it is a good decision.

Everyone involved in open salaries is invited to share their opinion, either supportive or critical, or to propose a different solution, e.g. a different raise. The discussion happens in writing, in a shared document, so that we can refer to it later and weigh in once we have really thought it through. Finally, the decision is still made by me. The difference is that we have a very open and very inclusive advisory process.

In fact, it is a variation of our decision making process.

The second stage will be different in that the decision will be made by the person who kicked off the discussion. There will, however, be overall financial constraints enforced by me, in the form of: “the budget for raises in the next quarter is no more than X.”

In the third stage we’ll remove the budgetary constraint. By that point it will purely be our decision making process.

Open-Salaries-Transparency-Lunar-Logic

As I mentioned, there’s no schedule for going through the stages. I’d even say it is possible that the last one will never happen, as we may never decide that we’re completely ready to remove all the constraints.

The crucial part of the process is the discussion. We chose not to use an algorithm to decide who earns what. The reason is that I have yet to see an algorithm that addresses what we value. Typically these algorithms stress technical skills, experience, seniority, etc. We, on the other hand, pay a lot of attention to organizational culture, collaboration, and helping others get better. It is difficult to quantify such things in a reasonable way.

We ultimately try to combine all our subjective opinions into an outcome that feels fair for everyone. Yes, that means that there are difficult discussions ahead. That’s why there’s one underlying principle for all the discussions: be respectful to everyone.

It is also an invitation for everyone to get involved. If someone is not happy with a proposal, and they don’t speak up, they can’t blame anyone else when a decision is finally made.

That’s pretty much it. One technical detail of launching open salaries was that opt-in decisions could be made during the whole day before the list was published. On one hand, it’s just a convenience. On the other, it means that when open salaries launch, the list will be fairly full.

And of course, even if someone decides not to join the open salaries program at its kick-off, they may join any time they want.

The whole plan was put under the scrutiny of the whole company so that I could make corrections before launching it. Nothing major popped up, though.

Make Your Specs Faster with Poltergeist. And Fail.

Some time ago we decided to make our acceptance tests faster. We were using Cucumber with Selenium, and we replaced it with the Poltergeist driver. Poltergeist uses the PhantomJS engine, and thanks to that our tests run around three times faster than they did before. Everything works smoothly on our machines, but there is one small problem: sometimes, in some steps, PhantomJS crashes on CircleCI :).

This forces us to click “rebuild” a few times in a row. That doesn’t make our tests faster, but the direction is good. So what could we do? We could:

  1. Connect to the CircleCI VM using SSH.
  2. Download the crash dump.
  3. Notice that it contains sensitive data.
  4. Report the crash by creating an issue on GitHub. (I surrender)
  5. Wait for someone to fix it or fix it by ourselves.
  6. Wait for a new version of Poltergeist.
  7. Wait for CircleCI to update their Poltergeist version.

Or maybe…

Rerun Failing Specs

Cucumber, like most testing tools out there, allows you to choose an output format. What’s more, it has one specific format called rerun which writes a list of failing scenarios to a specified file.

cucumber -f rerun --out failing_scenarios.txt

Once you have this file, you can run these scenarios again:

cucumber @failing_scenarios.txt

It’s as easy as that! Let’s write rake tasks which do this:

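The tasks were originally embedded here as a gist; a minimal sketch of what they can look like, assuming the task names used later in this post:

# Rakefile – a sketch, not the original gist
namespace :failing_cucumber_specs do
  desc 'Run cucumber, recording failing scenarios to a file'
  task :record do
    system('bundle exec cucumber -f rerun --out failing_scenarios.txt')
    exit 0 # never fail the build here; the rerun step gives the final verdict
  end

  desc 'Rerun only the recorded failing scenarios'
  task :rerun do
    exit 1 unless system('bundle exec cucumber @failing_scenarios.txt')
  end
end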

At the beginning I was afraid that this would not work with parallel nodes; failing_scenarios.txt shouldn’t be shared between them. But every CircleCI node is an independent virtual machine with its own filesystem, so every node has a separate file.

Now you can type rake failing_cucumber_specs:record and rake failing_cucumber_specs:rerun.

I also updated the test section of circle.yml:
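
Roughly like this (a sketch of the relevant part, assuming the circle.yml 1.0 format):

test:
  override:
    - bundle exec rake failing_cucumber_specs:record
    - bundle exec rake failing_cucumber_specs:rerun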

It’s a good idea to add failing_scenarios.txt to the .gitignore file before committing changes.

Usage with Knapsack

We use Knapsack (written by Artur Trzop) which splits tests among multiple nodes. Knapsack has its own adapter for Cucumber, so I had to modify the failing_cucumber_specs:record task. Here is a version for Knapsack:
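
A sketch of how the record task changes (assuming Knapsack’s cucumber rake task accepts extra cucumber options as a task argument):

namespace :failing_cucumber_specs do
  task :record do
    # knapsack:cucumber splits scenarios across CI nodes; the rerun formatter
    # options are passed through to cucumber (assumed invocation)
    system('bundle exec rake "knapsack:cucumber[-f rerun --out failing_scenarios.txt]"')
    exit 0
  end
end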

knapsack-logo

Possible Troubles

Exit 0 Is Not a Perfect Solution

If you look closely at the record task, you can see exit 0 after running Cucumber. We must return a successful exit code, because we don’t want our build to be interrupted while recording failing scenarios. The problem with Cucumber is that it returns 1 both when some scenarios fail and when it fails itself for any reason. Imagine such a situation:

  1. Cucumber doesn’t run specs, creates an empty failing scenarios file and crashes.
  2. CircleCI doesn’t notice that, because we force exit 0.
  3. The second Cucumber execution runs specs from an empty file. No specs, so it returns 0.
  4. The build is green.

Fortunately, the first point seems very unlikely. Even if Cucumber fails for a reason other than red specs (which is already unlikely), it doesn’t create an empty file, so the second Cucumber run fails. However, there was a feature request regarding Cucumber exit status codes. It’s implemented and merged into the master branch, so in future releases we will be able to determine whether scenarios failed (exit status 1) or the application returned an error (exit status 2).

Less Trust in Specs

Imagine some functionality which doesn’t work as expected from time to time, let’s say because of a race condition. This problem could be noticed when its test fails. Rerunning failing tests decreases the probability of detecting such an issue. I don’t think it’s a huge problem in our case, as I’ve never encountered it in any project I was working on at our company, but I feel obliged to mention it.

Once I started playing with the idea of making salaries at Lunar Logic transparent, the question that popped up almost instantly was: when? Well, the thing I should probably have started with was asking whether we wanted to do that at all.

Had I simply asked the latter out of the blue, I would likely have gotten mostly negative feedback. This is a huge change and one that may make people uncomfortable.

The answer to the “when” question would thus likely be “whenever we are ready.”

I started by sharing with everyone that there was an idea to go transparent with salaries at some point in the future. I got some early feedback on that, including concerns about what would happen once we had open salaries.

This gave me confirmation that transparency alone is not enough and there has to be an accompanying mechanism that allows people to influence how salaries are set.

Then there was the question of how fair we were with salaries at that moment. Well, from my perspective it was fairly fine, of course. However, making salaries open to everyone would mean that we were suddenly taking into consideration everyone’s opinion, not only mine.

I asked a few people at the company to prepare their abstract salary list. They could use whatever reference point they wanted, either their own salary or just a completely abstract number, like 100. Then they were supposed to place other employees on the scale relative to that point. I wasn’t very strict about it. “No opinion” was a perfectly good option, as was an incomplete list or partial information, if someone decided to use a whole set of factors.

I gave that task to a few people in different roles and of different characters to get as broad a set of data as possible. They worked individually.

The outcome was some sort of abstract, and partial, verification of how my views on salaries differed from the opinions of others.

One result of that was my further work on flattening the salary scale – a process that had been in place for some time already. There were a couple of cases where I realized that someone should have gotten a raise already, and I fixed that too.

Open Salaries - When

Concurrently, more and more informal chats around the idea were happening. Given a well-thought-out approach to the process, more and more people were buying into the idea. At some point, I felt that the majority of us supported it.

The last missing bit was figuring out how the change to transparent salaries would happen and what the mechanism for influencing salaries would be from that point on. On one hand, this part wasn’t easy. On the other, stories from companies that are already doing it are available. A few examples that come to mind: Buffer, Semco, and a few case studies covered in Reinventing Organizations.

I used these stories more as an inspiration than a recipe. Eventually, I ended up with an idea that was ready to be put under the scrutiny of the whole team.

We were ready.

By the way, if you are interested, the whole preparation process took 9-10 months.

Transparent salaries are becoming increasingly popular. I know more and more companies that decide to change the traditional approach and make salaries known within the organization. Buffer goes as far as publishing their salaries to the world on their blog.

We did the same at Lunar Logic.

Pursuing open salaries just for the sake of doing it doesn’t make sense, though. What were our motivations to go down that path?

Transparency - Open Salaries

 

“Decentralizing control requires decentralizing both the authority to make decisions and the information required to make these decisions correctly.”

Don Reinertsen

First of all, we are evolving toward no management. This means that we want everyone to be involved in everyday management. Now, for a service company such as Lunar, more than 80% of all our costs are labor-related. Without information about salaries, involvement in leading the organization simply can’t go very far.

“Collective intelligence was much more predictive in terms of succeeding in complex tasks than average individual intelligence or maximal individual intelligence.”

Anita Woolley

Another reason is fairness. The rule I’ve followed for years, whenever discussing salaries, is that I want them to be fair. Fair for an employee but more importantly fair for the whole team, group, organization. The problem is that it was me who was deciding what’s fair and what’s not.

Individual intelligence won’t beat collective intelligence on that account. I wanted to get more people involved in the process as this would let us make better, fairer decisions.

Transparency-OpenSalariesLunarLogic

Finally, I realized one thing when looking for stories of companies that either had open salaries from the beginning or changed to such a model at some point. Transparent salaries, once in place, aren’t much of a problem.

What happens to be problematic is the moment of the change. This, however, may be prepared for and managed so that the negative impact is not that strong.

There is one more thing that is a consequence of the arguments above. I assume that a single person acting in good faith won’t be as fair as a group can be. This means that even though I chased fairness, I must assume our salary list is unfair to some degree.

The outcome of this is that sharing information about salaries alone would trigger frustration. Some people would consider some salaries unfair and, unless they could do something about the situation, the only reasonable outcome would be frustration. This means that it’s not only about information but also about control.

Transparent salaries must go along with a mechanism that allows everyone to influence how salaries are set. As Don Reinertsen points out: it takes both information and authority.

That is, in short, why we decided to make salaries transparent and at the same time introduce a mechanism that allows everyone to influence what everyone’s salary will be in the future.

pg_morph

Some time ago I started experimenting with PostgreSQL features to allow creating foreign keys for ActiveRecord polymorphic relations. Since PostgreSQL is a pretty powerful tool, I ended up creating the pg_morph gem, which made it much easier to achieve. I already described how it works in my previous post, but if you don’t want to go back in time, I’ll give a short refresher.

Pg_morph takes advantage of the inheritance and partitioning features of PostgreSQL. For a main table which contains records associated with different tables, it creates partitions in which the proper records may be stored based on their association type. All those partitions inherit from the main table, so by running searches on the main table you can retrieve data from its partitions. There is one additional advantage: searching by only one type of association requires looking into only one partition, the others are omitted, so queries may be notably faster.

From ActiveRecord’s point of view nothing has changed, and this was also a goal for the new version of pg_morph, which took the magic even further beyond ActiveRecord.

So what has changed?

The initial version had one caveat, but a serious one. Because all inserts to the main table had to be redirected to partitions, nothing was created in the main table, and the RETURNING id statement at the end of each INSERT wasn’t returning what every Ruby on Rails programmer is used to – newly created records had an id equal to nil. Not nice. To bypass this behavior, PostgreSQL was allowed to create records in the main table, and then, after insert, the duplicated record was removed.

To avoid such an ugly solution I added a new layer on top of all the tables: a view over the main table. With that, there is no need to overload the database by creating redundant rows and deleting them immediately after, so additional operations which might cause problems are avoided. The main table remains empty, so the whole structure of partitions inheriting from the main table is built in accordance with the PostgreSQL specification. And what’s most important, ActiveRecord no longer has any problems with missing ids of newly created records.

‘Whoa, wait!’ you may exclaim. ‘How can AR know that now there is a view which has to be used instead of a good old table? Does it mean that I should change all my associations to use a new relation?’ No, don’t worry, pg_morph takes care of it. It creates the view in place of the main table and renames the main table, once again taking care of possible naming conflicts.

Give me some code!

Let’s take some example data model:
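
The models were embedded as a gist; a hypothetical equivalent, matching the images example used below:

# Hypothetical models: images belong polymorphically to users and items
class Image < ActiveRecord::Base
  belongs_to :imageable, polymorphic: true
end

class User < ActiveRecord::Base
  has_many :images, as: :imageable
end

class Item < ActiveRecord::Base
  has_many :images, as: :imageable
end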

So after running:
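a migration along these lines (a sketch, assuming pg_morph’s add_polymorphic_foreign_key helper):

class AddPartitionsToImages < ActiveRecord::Migration
  def up
    # one partition with a real foreign key per association type
    add_polymorphic_foreign_key :images, :users, column: :imageable
    add_polymorphic_foreign_key :images, :items, column: :imageable
  end
end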

you will have a view named images which knows exactly which tables to ask for the records. And what’s most important, both partitions – for users images and items images – will have foreign keys.

Also, removing this whole additional structure is as easy as adding one line to your migrations:
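
For example (again assuming the helper naming above):

remove_polymorphic_foreign_key :images, :users, column: :imageable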

It doesn’t matter whether you want to remove all partitions or only one particular partition. It will check if there are other partitions of the main table (in this example, images) which still have to be supported, and if there are no more of them, the view is removed and the main table recovers its previous name. Everything starts to work like nothing ever happened.

Of course there is still room for improvement, so if you see ways of making it better or spot any problems, simply share. Or even better – fork and send a pull request!

Internship

Let me start with the remark that I hate the title of this post. Product ownership, as it is typically understood, doesn’t exactly describe the role we are thinking of, yet I couldn’t come up with a better name. Anyway, bear with me, I’ll explain everything.

We are looking for an intern to do a little bit of work around our products and learn like hell for the rest of the time. The person in that role will be involved in shaping the development of one of our products. It means a little bit of what’s typically covered by Product Owners and a lot of what is happening within the Lean Startup movement.

It’s not going to be about prioritizing features or even choosing the ones that will get built. It’s about figuring out what the product is going to be. There’s nothing written in stone. Experimenting is the keyword here.

If you are passionate about software but don’t really feel like development is your thing, then this may be a perfect choice for you.

We don’t expect you to have extensive knowledge of the stuff mentioned above. We do expect that you learn like there’s no tomorrow. This is the point of the internship – to find someone who is capable of building expertise in such a role.

Here are some questions that you may want to ask.

Q: What are the requirements?

A: Besides learning capabilities and passion, English fluency is the only requirement. A lot of the sources you’d use are available only in English.

Q: What will I be doing during the internship?

A: Read. A lot. And then some more. In fact, that’s what you will likely do for the majority of your time here. You will use what you’ve learned working on one of our products. You will design and run product experiments. You will get out of the building. You will get out of your comfort zone. You will turn an idea for a product into a real thing.

Q: Wait, what? That sounds serious. Is that a role for an intern? My ideas would be ignored by more senior people, I would guess.

A: The whole idea of the internship is bringing specific knowledge to the team and you are going to be the very person who adds that to the mix. Besides we don’t really have a hierarchy at Lunar so no one looks down on interns.

Q: There is someone doing this thing at the company, so they’d mentor me, right?

A: No, not really. That’s the whole point of the “learn like hell” part of the requirements. For some, this thing is super fun; for others, a struggle. We are looking for the former.

Q: This whole thing sounds crazy but I’m willing to give it a shot. How long is it going to last?

A: Half a year sounds like a good plan, but it’s not written in stone. If you have other ideas, let us know.

Q: Whoa, that’s a lot of time. Do I get paid?

A: Yup. This is not a plot to get cheap labor, but rather, to run an organizational experiment. We are serious about experimenting.

Q: So what happens after the internship?

A: If it works out well we will most likely change it into a permanent role. Oh, and if it works out well, we will also have a perfect candidate for the role. You. That would be quite a nice coincidence, wouldn’t it?

Q: How is the hiring process going to look?

A: First, we want to have a meaningful chat about the ideas of Lean Startup. I mean, we really want you to first read the book and, only then, reach out to us. Other Lean Startup sources can help too but, come on, we already ask you to read the book up front. The goal of that part is for you to figure out whether that’s something you’d like to do, and for us to get a hint of how your mind is wired and how you learn.

Then, we’d run a demo day with the few people who impressed us most. A demo day is a rather unstructured day spent on site with us, where we try to figure out whether a new person is a good fit for us. You can expect some hands-on work, a few more formal conversations, and a lot of less formal ones. It seems you can expect quite some fun too.

Q: One thing still bothers me. You want to invest so much time to the internship instead of hiring an expert. Why?

A: It’s an experiment. That’s one. Besides, the kind of expertise we look for is, unfortunately, really rare. That’s two. Finally, we care a lot about organizational culture and cultural fit, which generally makes hiring a challenge for us. This means that it’s not only about skills but also about how you fit the company. That’s three.

Besides, we’ve been through a number of cases of people growing at Lunar and we most definitely wouldn’t mind another one.

Q: How to apply?

A: Once you’re ready to talk about Lean Startup, write an email to Pawel.

Q: I have another question.

A: Send Pawel an email.

UPDATE: We have finished the hiring process for the internships. Thank you for participating.

Fat models

“Thin controller, fat model” – noooooo!!!

We probably all agree that “Thin controller, fat model” was a misconception. While “thin controller” sounds good, “fat model” is just a pain in the… let’s say – lower back ;)

Last week Ania Ślimak was talking about putting the model on a diet. She presented a nice approach to reducing fat from models using a validation factory. This solution is useful in many cases, but it may not solve all the problems, so I wanted to continue the topic.

In the validation factory approach, validations are still stuck to the model and this responsibility is not fully extracted.

Validation rules and the model schema are likely to change during their lifetime.
Every time you add or remove a rule or a field, you may make existing records invalid in terms of the validation rules connected to the model. You have to maintain their validity, which is often unnecessary.

Validations are often contextual.
Sticking them to models makes using them in other places (e.g. another action or an admin panel) much harder and ends with plenty of ifs and unnecessary attributes describing the context.

The same issue happens with other stuff like sending emails in callbacks, which are often (unnecessarily) the model’s responsibilities.

The solution

Let’s not only move validations to a service object, but make the model fully isolated from them and keep data representation as its only job.

The validation service defines the validation rules, takes a record as an argument and delegates attribute reads to the record. Some general logic is extracted to BaseValidator so other validation services can reuse it:
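
A rough sketch of that idea, with hypothetical class names (the original snippets were embedded as gists):

class BaseValidator
  include ActiveModel::Validations

  def initialize(record)
    @record = record
  end

  # validations read attributes from the wrapped record
  def read_attribute_for_validation(attribute)
    @record.public_send(attribute)
  end
end

class UserValidator < BaseValidator
  validates :email, presence: true
  validates :name, presence: true
end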

And the happy model doesn’t know anything about validations:
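
Something as plain as this sketch:

class User < ActiveRecord::Base
  # just the data representation – no validations, no callbacks
end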

But where do we actually call the validations? The model? We said ‘no’ to that already. The controller? Well… we don’t want to put more responsibilities there than handling the request and responding.

So, let’s introduce another service object called a Handler (Creator would be a good name too), which sticks everything together:
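
Again a sketch with hypothetical names:

class UserCreationHandler
  attr_reader :user

  def initialize(params)
    @user = User.new(params)
  end

  def call
    validator = UserValidator.new(user)
    return false unless validator.valid?
    user.save(validate: false) # the rules already ran in the validator
  end
end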

What about the controller? It’s thin and happy as well:
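
Something like:

class UsersController < ApplicationController
  def create
    handler = UserCreationHandler.new(user_params)
    if handler.call
      redirect_to handler.user
    else
      render :new
    end
  end
end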

Additionally, we could extract the parameter sanitization logic to a service object and make the controller even cleaner.

Outcome

This way we have code:

  • with reduced model responsibilities
  • that is way easier to maintain
  • that is easy to (unit) test
  • that is more OOP and closer to the Single Responsibility Principle.

lunar_welcome

This is my story. The story of a green smiley alien from planet Italy who decided to follow his dream and never listened to the naysayers.


One year ago I was in Chile, experiencing the most amazing adventure of my entire life. Most importantly, there I discovered what I wanted to do in life: web development.

As a matter of fact, during my university exchange, I was lucky enough to meet a bunch of people who first made me fall in love with web technologies, then set my enthusiasm on fire and, finally, showed me the bleeding edge stuff down the path.

That’s why, approaching the end of my South American studies, I felt lost: was my dream over?

In that moment I discovered something great. There is a place on planet Earth that a group of people is using as a headquarters for their space web missions. And not only are they doing the coolest things, they also have great fun in the process.
That place is called Lunar Logic.

lunar_logic_observatory

Back then I had a crazy idea: I could revolutionize my plans and join those crazy and foolish space travelers for an internship. That way I would have a whole group of astronauts with whom to grow as a developer while writing my master’s thesis.

I didn’t even have time to think rationally about it before I was sold on the idea. Problem was, Lunar didn’t have any position open. Furthermore, they had recently closed the summer internship program.

There was only one way to convince them: I just needed to make the impossible possible.

For a few months I had been researching and studying every possible piece of information I could dig up about them. Armed with that knowledge, I tailored a perfect CV and a Polish-ed cover letter.

It was a one shot, one kill chance, so I bought a flight ticket. Then I sent them my presentation letter, saying that I was going to be in Poland and would love to meet face to face to discuss my application.

The day I got their OK I was already traveling through Poland. It made me super happy, but I knew I would need much more than that before celebrating. I still had to face the Demo Day, a day-long interview with the whole crew.

Our contact happened on August 27th. When they opened the gate of the rocket to let me in, the feeling was strange; I already knew so many things about them and their shuttle that it all seemed like a meeting with old buddies.

The moment I got into the place I spotted the Happiness Chart, the whiteboard where every Lunar person tracks his or her mood daily using a colored drawing: red for bad, blue for so-so and green for happy.

Happiness Chart

Walking through the hallway, things got more and more crowded. So much so that, when we finally got to the kitchen, the room was literally packed.
What surprised me is that I was the one asking about technical stuff; all of them were more interested in my life, hobbies and experiences.

Suddenly, all the people started gathering in the big room I had seen near the entrance. It was Lean Coffee time: Lunar Logic’s weekly meeting.

I was stunned by the biggest room in Lunar: it felt relaxing with its two sofas and lots of fluffy bean bags, amusing with its foosball and darts, stimulating with its square black Ikea library full of books.

Back to Lean Coffee. On the whiteboard the astronauts were writing down, one by one, the topics they wanted to talk about together. Then, after a brief vote, they discussed every point in the established order.

I was shocked by how much every idea was listened to and encouraged. It felt even more surprising because every new voice I heard supported a different opinion from the previous one. Still, the discussion was balanced.

lunar_sofa_room

After the meeting I met the captain of the (space)ship and took a seat with him on one of those comfy sofas.
What followed was the most intriguing two-hour interview I have ever had. And that wasn’t because he was drinking a beer in the meantime. Or maybe it was?

Jokes aside, that conversation turned out to be really challenging. We were both trying to understand whether we would be a good fit for each other.
That was the moment I decided to drop the bomb. It was a dangerous move, but rule number one during interviews is to be honest, so I complied.
Thing is, my situation was fairly complicated because I only had a few months free before going back to Italy to attend a few more courses and graduate.

“It’s going to be something uncommon anyway, so we can bend some details to have you aboard.” That answer ended our chat.
The whole conversation left me super inspired, and on the emotional level I got the answer I needed: Lunar is about people, about the group.

At that point the clock on the wall struck 14:00 and I was left in the hands of the nerds. First, I had to pair program some new functionality. Then we talked about my background as a developer.
I have to be honest, that was the part of the interview where I sucked the most. But it’s just the way I am: ace the impossible things and slip on the easiest ones.

I was 6 hours into the Demo Day, but the most important test was still to come. With three more developers, I had to prove my reflexes and physical coordination.
It was foosball time, and it wasn’t even a matter of being bad. I got completely owned. At some points I couldn’t even see the ball. I guess their zero gravity training in deep space made the difference.

The moment I got out of Lunar Logic’s base I only had one thing in mind: I wanted to become a proud astronaut.

I’ll never forget when, a few days later, seated on a bed in the worst hostel ever, I read the Lunar email with their positive answer.

lunar_email

My dream was on. Again.

Now I just had to convince my university and a professor to supervise my work on a project in Poland that didn’t even exist.
But that’s another story.

If you liked the post, ping me on Twitter @riccardoodone.
If you hated it, you can go drink your Hatorade somewhere else!


This post is dedicated to all my Chilean friends and professors who supported me for a full year and gave me the inspiration and enthusiasm to undertake this path through web development; without you I would not have a dream to fulfill.

Also, I’ll always be grateful to all the Lunar folks who gave my dream a shot and welcomed me into the family.
I feel proud to be a member of this awesome crew.

blink-terminal

I use many different command-line tools on a daily basis, e.g. git to track changes in projects, autojump to navigate faster between directories, brew to install software and so on. Some time ago I started watching my workflow. What I noticed is that some commands appeared more frequently than others and they contained more typos. I decided to simplify the usage of these tools to have fewer problems with typing all the options properly. Here is a short tutorial on how to do that easily.

Motivation

In our internal server infrastructure at Lunar we deploy demo and production applications to different containers on different servers.

Let’s assume that to access a container with a deployed application we need to execute one of the commands below:

ssh user@app.demo.domain # demo
ssh user@app.domain # production

After some time I noticed that I was repeating most of these steps:

  • Make an SSH connection (every single time)
  • Go into the application’s directory
  • Run rails console
  • Run tail on the application’s logs
  • etc…

Gems for command-line apps

Writing applications with manual argument parsing didn’t appeal to me, so I did some research on existing gems. One of the first results was the Ruby Gems for Command-Line Apps page written by David Bryant Copeland. I browsed through the list and decided to give the GLI gem a try.

Design

Let’s design a tool that solves or simplifies executing the steps mentioned earlier. Let’s call it blink.

Usage of blink may look like this:

blink shell app environment # connect and go to app's directory
blink rails console app environment # connect and start rails console
blink logs app environment # connect and tail app's logs

Implementation

You can follow the steps below or clone the blink-demo repository.

We need to install GLI first:

gem install gli

Then we create a scaffold project:

gli init blink shell rails logs

We focus only on the bin and lib directories and the following files in the structure:

.
├── bin
│   └── blink
└── lib
    ├── blink.rb
    └── commands
        ├── base.rb
        ├── logs.rb
        ├── rails
        │   └── console.rb
        └── shell.rb

We create missing files and directories:

mkdir lib/commands
touch lib/commands/base.rb
touch lib/commands/logs.rb
mkdir lib/commands/rails
touch lib/commands/rails/console.rb
touch lib/commands/shell.rb

bin/blink

lib/blink.rb

lib/commands/base.rb

lib/commands/logs.rb

lib/commands/rails/console.rb

lib/commands/shell.rb
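
The files themselves were embedded as gists. To give you the rough shape (a sketch under assumptions, not the original code): the GLI entry point wires commands to small command classes that shell out over SSH.

#!/usr/bin/env ruby
# bin/blink – a sketch, assuming GLI's standard scaffold
require 'gli'
require 'blink'

include GLI::App

program_desc 'Shortcuts for common SSH tasks on app containers'

desc 'Connect and go to the app directory'
arg_name 'app environment'
command :shell do |c|
  c.action do |_global, _options, args|
    Commands::Shell.new(*args).run
  end
end

exit run(ARGV)

# lib/commands/base.rb and lib/commands/shell.rb – the same sketch continued
module Commands
  class Base
    def initialize(app, environment)
      @app = app
      @environment = environment
    end

    def host
      @environment == 'production' ? "#{@app}.domain" : "#{@app}.demo.domain"
    end

    # ssh -t forces a TTY, so interactive commands (bash, rails console) work
    def ssh(remote_command)
      exec %(ssh -t user@#{host} "#{remote_command}")
    end
  end

  class Shell < Base
    def run
      ssh "cd #{@app} && bash"
    end
  end
end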

Further improvements

Our application is ready to use but we can still:

  • install the application (e.g. add the bin directory to the PATH environment variable)
  • add more commands
  • use configuration files and remove environment-dependent code

Resources

GLI source code (GitHub)

GLI: Make Awesome Command-Line Applications the Easy Way

GLI: A code walkthrough

ssh -t

lunar_welcome

This is my story. The story of a green smiley alien from planet Italy that decided to follow his dream and never listened to the naysayers.


One year ago I was in Chile, experiencing the most amazing adventure of my entire life. Most importantly, there I discovered what I wanted to do in life: web development.

As a matter of fact, during my university exchange, I was lucky enough to meet a bunch of people that, at first made me fall in love with web technologies, then set my enthusiasm on fire and, finally, showed me the bleeding edge stuff down the path.

That’s why approaching the end of my South American studies I felt lost: was my dream over?

In that moment I discovered something great. In fact, there is a place on planet Earth that a group of people is using as a headquarter for their space web missions. And not only they are doing the coolest things, they also have great fun in the process.
That place is called Lunar Logic.

lunar_logic_observatory

Back then I had a crazy idea, I could have revolutionized my plans and joined those crazy and foolish space travelers for an internship. That way I would have had a whole group of astronauts with whom grow as a developer while writing my master thesis.

I didn’t even have the time to think rationally about it that I was already sold on the idea. Problem was, Lunar didn’t have any position open. Furthermore, they had recently closed the summer internship program.

There was only one way to convince them: I just needed to make the impossible possible.

For a few months I had been researching and studying every possible piece of information I could dig about them. Armed with that knowledge I tailored a perfect CV and a Polish-ed cover letter.

It was a one shot one kill chance, so I bought a flight ticket. Then I sent them my presentation letter, saying that I was going to be in Poland and I would have loved to meet face to face to discuss my application.

The day I got their ok I was already traveling through Poland. It made me super happy but I knew that I would have needed much more than that before celebrating. In fact, I had to face the Demo Day, a day long interview with all the crew.

Our contact happened on August 27th. When they opened the gate of the rocket to let me in, the feeling was strange; I already knew so many things about them and their shuttle, that it all seemed like a meeting with old buddies.

The moment I got into the place I spotted the Happiness Chart, the whiteboard where every Lunar person tracks his or her mood daily using a colored drawing: red for bad, blue for so-so and green for happy.

Happiness Chart

Walking through the hallway the situation got more and more crowded. So much that, when we finally got to the kitchen, the room was literally packed.
What surprised me is that I was the one asking about technical stuff, all of them were more interested about my life, hobbies and experiences.

Suddenly, all the people started gathering in the big room I saw near the entrance. In fact, it was Lean Coffee time: Lunar Logic weekly meeting.

I was stunned by the biggest room in Lunar: it felt relaxing with its two sofas and lots of fluffly bean bags, amusing with its foosball and darts, stimulating with its squared black Ikea library full of books.

Back to Lean Coffee. On the whiteboard the astronauts were writing down, one by one, the topics they wanted to speak about altogether. Then, after a brief vote they discussed every point in the established order.

I was shocked by how much every idea was listened and encouraged. That felt more surprising because every new voice I listened to was supporting a diverse opinion from the previous one. Still the discussion was balanced.

lunar_sofa_room

After the meeting I encountered the captain of the (space)ship and took a seat with him on one of those comfy sofas.
What followed was the most intriguing two-hours interview I have ever had. And that wasn’t because he was drinking a beer in the meanwhile, or maybe it was?

Jokes aside, that conversation turned out to be really challenging. In fact, we both were trying to understand if we could have been a good fit for the job.
That was the moment I decided to drop the bomb. It was a dangerous move but rule number one during interviews is to be honest, so I complied.
Thing is, my situation was fairly complicated because I only had a few months free before coming back to Italy to attend a few more courses and graduate.

“It’s going to be something uncommon anyway so we can bend some details to have you aboard”, that answer ended our chat.
This whole conversation made me super inspired and on the emotional level I got the answer I needed: Lunar is about people, about the group.

At that point the clock on the wall struck 14:00 and I was left in the hands of the nerds. At first, I had to pair program some new functionality. Secondly, we talked about my background as a developer.
I have to be honest, that was the part of the interview where I sucked the most. But it’s just the way I am: ace the impossible things and slip on the easiest ones.

I was 6 hours into the Demo Day but the most important test was still to come. In fact, with three more developers I had to prove my reflexes and physical coordination.
It was foosball time and it wasn’t even a matter of being bad. I got completely owned. At some points I couldn’t even see the ball. I guess their zero-gravity training in deep space made the difference.

The moment I got out of Lunar Logic’s base I only had one thing in mind: I wanted to become a proud astronaut.

I’ll never forget how, a few days later, while seated on a bed in the worst hostel ever, I read the Lunar email with their positive answer.

lunar_email

My dream was on. Again.

Now I just had to convince my university and find a professor willing to supervise me on a project in Poland that didn’t even exist.
But that’s another story.

If you liked the post ping me on twitter @riccardoodone.
If you hated it you can go drink your Hatorade somewhere else!


This post is dedicated to all my Chilean friends and professors who supported me for a full year and gave me the inspiration and the enthusiasm to undertake this path through web development; without you I would not have a dream to fulfill.

Also, I’ll always be grateful to all the Lunar folks who gave my dream a shot and welcomed me into the family.
I feel proud of being a member of this awesome crew.

Visualizing your business ideas in simple ways has become much easier as many new solutions appear on the wave of Lean Startup’s growing popularity. There are differences between the various tools: themes, form, the level of detail and the elements on which they focus attention. Learn to use them properly and you’ll become better at analyzing your ideas and product concepts, and at communicating the most important issues facing your new business.

Canvases are a great alternative to traditional business plans, which usually require a lot of documentation and don’t stress the most important information, reducing the clarity and readability of your business model for potential investors, partners and even employees. And, most importantly, they don’t allow entrepreneurs to work with their business ideas in flexible ways.

Business Model Canvas

The most popular new tool for visualising a business plan is the Business Model Canvas, which visually describes the key elements of a new company, service or product. It focuses on the value which the business will offer customers, with the Value Proposition placed at the center of the canvas. Key partners and market/customer analysis are also important. It is possible to fill out the canvas using online tools (e.g. via the Strategyzer app or the Canvanizer page). A useful supplement to that model is the Value Proposition Canvas, which helps you work out information you’ll need to complete your Business Model Canvas. You should choose the Business Model Canvas if partners and resources are important in your business model; in the other canvas models those parts are less prominent.

Business Model Canvas

Let’s say our business is an app which allows people to design pet clothes, letting you put your pet in a costume (like that dog in a spider costume).

Lean Canvas

Less popular, but appreciated for its simplicity, is the Lean Canvas. This canvas focuses on the users’ problems which your product or service will solve, and on the concrete solutions offered. The Lean Canvas has two parts: product and market. In contrast to the BMC, it assumes that the value offered to users will be unique, competitive and new. This canvas also adds a very important component: Key Metrics – the numbers you will track to help you recognize your product’s success. The most distinctive characteristic of the Lean Canvas is the “Unfair Advantage” box, which describes the product’s competitive advantage (i.e. something that is hard to copy or buy). You should choose the LC model if you’re focusing on users’ problems, solutions and measuring the progress of your business.

Lean Canvas

Using that canvas I realized that we need to change the concept a bit. I focused on the problems and the solutions we offer (adding clothes created by designers, etc.). I also need to change the value proposition. And there is one box – “Unfair Advantage” – which I still need to rethink.

Petri Model Canvas

Personally, I usually use either the Lean Canvas or Petri’s canvas (the FTE canvas), which is its extension. The FTE canvas places more focus on customers, with additional information about the ones that should be paying users. The flow of canvas creation is strictly defined and starts from the “Paid Users” perspective.

Unique to the FTE canvas is the distinction of users who buy your product at an early stage (Early Buyers). This group is often forgotten by people founding their own business. Co-founders focus on the target groups of the finished product, forgetting the natural distribution curve or the product life curve. They often forget that the product first needs to be adopted by the users Rogers calls “innovators”, whose characteristics usually differ from those of the mass market. Or, conversely, they treat early adopters as the target group, forgetting that early buyers are just a small part of all potential clients and that achieving market success means reaching a much larger number of customers. The Rogers Adoption Curve shows this in a great way, and the FTE canvas distinguishes those groups clearly, too.

This canvas also stresses the value of competitor analysis. The Existing Alternatives box focuses the entrepreneur’s attention not only on direct, active competitors, but also on the other products and services which already solve the same users’ needs. Startups often forget about competitors, which is a point often stressed by investors. There is even a special canvas dedicated to competition analysis in a lean way.

The important element which differentiates the FTE canvas from the Lean Canvas and the Business Model Canvas is the bottom row of the canvas, responsible for the financial side of the business. In the preceding models the last row included Cost Structure and Revenue Stream, while in the FTE canvas there are: Costs, Customer Acquisition Cost and Customer Lifetime Value (also called LTV or CLV).
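To see why that bottom row is so practical, here is a toy unit-economics check in Ruby – all the numbers are invented, only the standard definitions of CAC and LTV are assumed:

# Toy check of the FTE canvas bottom row. All numbers are invented.
marketing_spend = 5_000.0 # monthly spend
new_customers   = 250     # customers acquired with that spend
monthly_revenue = 9.0     # average revenue per customer per month
avg_lifetime    = 14      # months an average customer stays

cac = marketing_spend / new_customers # Customer Acquisition Cost
ltv = monthly_revenue * avg_lifetime  # Customer Lifetime Value

puts format('CAC: %.2f, LTV: %.2f, LTV/CAC: %.1f', cac, ltv, ltv / cac)
# A model is usually considered healthy when LTV comfortably exceeds CAC.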

As you can see, the structure of the FTE canvas is much more practical and measurable. It allows easy and efficient verification of business hypotheses. You should choose it if monetisation and the financial side are key for your business (so almost always?).

FTE canvas

There are many questions in that canvas that I don’t know the answer to yet. I changed the KPIs based on the “Paying Customer” and “Early Buyers” boxes. I also need to describe the marketing & sales costs more precisely.

You need to remember that canvases are only a map of the issues that every startup should consider. There is value in experimenting with these models, because no canvas is always the best, no matter the product or context. You can use more than one canvas at the very beginning; it forces you to look at a product from different perspectives. You can also switch canvases while working on a product (for example from LC to FTE if, at first, you are working on the global concept and then you’d like to focus more on the financial side). But I don’t recommend filling out and updating a couple of canvases at the same time, because it won’t give you any additional benefits and will eat up time which should be invested in developing the product. Regardless of which canvas you choose, the key is this: if you can’t fill out any canvas in a logical way, you should rethink your business model or even think again, deeply, about its formula.

We are a web software shop. We frequently work for startups, including very early-stage ones. Inevitably we get asked to estimate batches of work.

I never have a quick and easy answer when it comes to estimations.

On one hand I perfectly understand the rationale behind requesting estimates. If I were starting my own project or even a company, I would like to know how much I need to bet in order to turn it into a business or at least validate the hypothesis that it can be turned into one in the first place. In fact, the scale of that bet would influence my decision.

Even if the expected cost wouldn’t make me turn away from the idea altogether, it may influence decisions about who will build it for me and how I will structure the development process. It’s not that I think the expected cost should be the determining factor in the ultimate decision about building software. Pretty much the opposite. It does, however, have to be taken into account.

In either case, an estimate provides some information which is often pretty important in deciding on a course of action.

What’s the problem with estimates then?

Estimates are most often interpreted as commitments, which they are not. Not only does this influence how people act when it turns out the estimates were not accurate, it also changes the whole discussion about estimation. A thing I would frequently hear is that since a project is well defined, providing a precise estimate should be easy.

It’s not, for a number of reasons.

Precise Estimates

One thing is that the human brain is pretty much incapable of estimating in abstract metrics such as space and time. Not only that, it also doesn’t get better simply with more experience at estimation. Research conducted by Roger Buehler et al. showed that even for tasks we are familiar with, and even when asked for an estimate we’re 99% sure of, we get that estimate wrong more often than not.

planning fallacy

Let me rephrase: even when we play it super-safe, the quality of our estimates is mediocre at best.

In software development, in the majority of cases, we simply can’t play it super-safe. A new project or product is by definition new, so familiarity with the subject matter is always limited, and clients very, very rarely are willing to pay for confidence levels as high as 80%, let alone 99%.
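To get a feel for what high confidence levels do to an estimate, here is a toy Monte Carlo sketch in Ruby – every parameter is invented and it only illustrates the mechanics: a project is modeled as a sum of right-skewed task durations, and confidence levels are read off the simulated distribution. The gap between the 50% and the 99% number is the price of playing it super-safe.

# Toy Monte Carlo: a "project" of 20 tasks, each with a right-skewed
# (lognormal) duration in days. All parameters are invented.
RUNS  = 10_000
TASKS = 20

# Sample a lognormal value with the given median, via Box-Muller.
def lognormal(median, sigma)
  z = Math.sqrt(-2 * Math.log(1.0 - rand)) * Math.cos(2 * Math::PI * rand)
  median * Math.exp(sigma * z)
end

totals = Array.new(RUNS) { (1..TASKS).sum { lognormal(2.0, 0.6) } }.sort

[0.5, 0.8, 0.99].each do |level|
  puts format('%d%% confidence: ~%.0f days', level * 100, totals[(RUNS * level).ceil - 1])
end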

The bottom line is that even for a fairly well known domain we can’t expect precise estimates.

Scope of Work

That’s not all though. Another argument in this discussion is what gets estimated. First we estimate a batch of work, and then we never, never ever, end up building exactly that. I’ve yet to see a project where the scope doesn’t change over the course of building it.

There are a number of good reasons for that.

  • As the project progresses and a client sees the parts that are already working they have more insight on what should and what should not be there.
  • As time passes clients learn more about their business environment and that results in changes in how they think about the product.
  • We are only human and simply forget stuff.
  • We have to take care of all the rework that happens once a decision maker changes their mind after something has already been built.

The worst thing that can happen is when a death cycle starts: we keep adding new features to a product, which makes it late, and this means we have to add even more features to catch up with the market. This in turn makes the product even later, which forces us to add even more stuff… Now, go figure what happens to the reliability of estimates in this scenario.

The tricky part is that it’s much easier for us to add new stuff to the plate than to remove existing things. Most commonly we end up building more stuff than what was part of the initial scope. How does that work for estimates?

Knowledge Discovery

Another challenge is related to the learning process that is an inherent part of every project. We start a project with a bunch of assumptions. These assumptions will frequently get invalidated which, more often than not, means more work than expected.

A common answer to that is to add more details to specifications. That doesn’t really work. It would if we were talking about the known unknowns; in other words, if we knew exactly what questions we should ask. The big risks are in the unknown unknowns – the ones we can’t predict or ask about up front. We will become aware of them eventually, and they will affect the scope of work, but the trigger is having parts of the app built and getting feedback from a client.

There’s another reason why going into more detail with a specification is most often a bad idea. It doesn’t come for free. It means spending time and money on scoping out all the details, and it delays the start of the actual work. The latter is frequently much more costly because of the Cost of Delay.

In fact, Douglas Hubbard argues that adding more and more details makes estimators more confident while the quality of their estimates gets worse.

When you look at a project from the perspective of a knowledge discovery process, the moment where you naturally know the least is before it commences. It is at this exact point where we are expected to prepare estimates.

Collaboration

Finally, there’s something that is almost never taken into account, despite the fact that this factor alone may be responsible for increasing the effort needed to build the same batch of work by a factor of two or more.

This magic factor is the quality of collaboration. It affects a team working on a project on many levels.

The first thing is the availability of the client. If we get all the information we need in a timely manner, we go in with far fewer non-validated assumptions when we build specific features. This greatly reduces the amount of tweaking needed to satisfy the needs of the client.

Then we have feedback. The faster we get feedback about the stuff we’ve built, the faster we can improve it; thus we shorten the cycle time of finalizing features and limit the number of tasks in progress. This results in less multitasking and much better efficiency of work.

If that wasn’t enough, we also have more psychological factors. If collaboration on an interpersonal level works well, people tend to be much happier. Happy people also means more efficient work. Not to mention that such a team is much more likely to go the extra mile.

It all stacks up to the point where the quality of collaboration is typically the biggest leverage one can use to limit the effort and cost of building a project. At the same time, we never know how this part will go until we start working together. On one hand it is a major factor that influences any estimate; on the other it can hardly be known at the time when estimates are prepared.

The Bottom Line

Let’s go back to what I started with. Estimates often have a crucial role to play in the decision making process. We can’t get them right though. Are we doomed then?

Not really. There are two basic lessons to learn here. One is that the reasons for commonly low-quality estimates are not specific to us – these are general observations. That means that when someone provides a precise estimate, they’re either unaware of all the uncertainty involved or they’re one of those “let’s throw an appealing number at a client and then we’ll figure something out” guys.

If the latter is true I’d challenge the idea that it’s wise to work with such a partner. It’s not going to turn into a trust relationship.

If the former is the case, there’s at least some hope. With some dose of luck the outcome of such a project can be OK, and a reasonable estimate can make the decision about kicking the project off easier. There’s a downside too: such an approach is reckless. It means a lack of awareness of risks and, as a result, no risk management. When a budget is not sufficient to cover every single thing, which is a common situation, tradeoffs have to be made at the moment when we have the fewest options available.

I don’t think I need to mention that it is much better to actively manage risks, including those related to the scope of work, from the very beginning, as this universally yields better outcomes. For that a level of awareness of the challenges facing estimation is a hard prerequisite.

Fixed Budget

After sharing all that, one could get the impression that we wouldn’t consider working under fixed budget constraints. After all, we say that the estimates we provide have a high level of uncertainty, which translates to a lack of hard commitment that we will finish a defined batch of work within a specified budget.

Such an impression would be wrong. We have nothing against working under a fixed budget. The one thing both parties need to be aware of is that in this scenario we don’t treat the scope as fixed.

There are consequences of that fact for both parties. One thing is that we need to work very closely with a client to define what the crucial parts of the app are and in what order features should be developed.

It doesn’t mean that we need to make all the decisions up front. In fact, my common advice is to define a set of features that the application must include in every single, even the craziest, scenario that can happen. I call it a Minimal Indispensable Feature Set. Such a set of features definitely wouldn’t constitute an MVP. It shouldn’t. It shouldn’t even be viable for release. At the same time it is a pretty safe choice, as these features are assumed to ultimately be part of the app in all possible cases.

Further decisions can be delayed till we get feedback from building this set of features, which provides a lot of value:

  • We will be further down the knowledge discovery process so we will know more about the work in general.
  • We will get some feedback from a client about the early releases.
  • We will uncover some of our assumptions – unknown unknowns – and move them to a more predictable domain, the one where we at least know what questions we should ask.
  • Based on the data points from the early stage of the project we will be able to tell much more and with much better precision what our pace of development is and thus vastly improve our estimates.

From that point on we can continue making decisions about the scope in a similar manner, deciding on fairly small chunks of work that we know we will need, and continuously iterating on that while learning more about the work we do. It also means that our understanding of what fits the fixed budget will keep improving and will thus inform the tradeoff decisions. We will squeeze the most out of the available budget.

By the way, we can use exactly the same approach in the case where we have continuous funding for a project, although I rarely see enough determination and discipline on the client’s side in such scenarios to go for it.

Meaning of Estimates

My message here is twofold. The less important part is explaining why we do what we do with the estimates and why I would always start answering a question about estimates with “It’s complicated…”

The important part is that in order to have a meaningful discussion about estimates we need to understand all the caveats behind the process. Otherwise, we simply go with the answer that is most comforting for us initially, which is likely also the one that will introduce a lot of trouble down the line.

It is very rarely, if ever, a sensible choice.

How many colors do you see?

Have you ever wondered how many colors a picture actually has? How many colors do you think are in the following image?

cat

Many people would say 2, but actually there are 1942.

That’s because, in order to render an image with smooth borders, not all of the pixels are pure black; many of them are in-between grayscale colors.

Existing solutions

I came across this interesting problem recently, as I needed to implement validation of the color count for images users upload to our Rails application.

My first thought was that it should be trivial to do just using RMagick. Unfortunately, the color_histogram method returns all these grayscale colors too, and I couldn’t find a way to exclude them using just RMagick. I tried all possible image processing options, flattening the colors as much as possible to reduce the color count, but it seemed impossible to find parameters that would work for all the possible images people could upload.

I found some sort of a solution using a Python library – colorific – but I wasn’t able to configure it in a way that would work for all my sample images either.

Algorithm

So I decided to change my approach completely. Using RMagick I got a color histogram, which gave me all the existing colors with their occurrence counts, so I could easily calculate the percentage of the image covered by each particular color.
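In code, that first step could look roughly like this – a minimal sketch, assuming RMagick is installed and using an example file name:

require 'rmagick'

image = Magick::Image.read('cat.png').first
histogram = image.color_histogram # { Magick::Pixel => pixel count }
total = histogram.values.reduce(:+).to_f

# Map every color to the percentage of the image it covers.
percentages = histogram.each_with_object({}) do |(pixel, count), result|
  result[pixel.to_color] = count / total * 100
end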

Now I needed to decide which colors to take into consideration and which to ignore. I didn’t want to just set an arbitrary percentage threshold. Imagine we set the threshold to 1%, so every color that doesn’t cover more than 1% of the image is ignored. Now let’s take an image 10 pixels wide and 10 pixels high consisting of 100 different colors, so each one covers exactly 1% of the image. With such a threshold all colors would be ignored, so the image would have 0 colors while it actually has 100. Setting the threshold too low doesn’t work either, as for other pictures it would count more colors than expected.

So I came up with the idea of sorting the colors by percentage in descending order and taking the first of them that together sum up to an arbitrary percentage – after experimenting with many samples I chose 98.1%.
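A sketch of that reduction step could look like this (the 98.1% cut-off is the one mentioned above; the rest is illustrative, not the gem’s exact code):

# Keep the most frequent colors until, together, they cover the
# threshold; the long tail of anti-aliasing noise gets dropped.
THRESHOLD = 98.1

def significant_colors(percentages)
  covered = 0.0
  percentages
    .sort_by { |_color, percentage| -percentage }
    .take_while { |_color, percentage| (covered += percentage) - percentage < THRESHOLD }
    .to_h
end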

But then I conducted an experiment on the image below:

colorful

As you can see, there are gradients on every letter and here these gradients actually matter.

Take a look at the magnified letters “o” and “r”:

colorful-magnified

If you just take into consideration one of the green colors from the “o”, it will have such a low percentage score that it will be ignored by the reduction algorithm.

I realized that I needed to group similar colors before reducing. So first I had to decide what “similar” actually means. I tried comparing two colors in different color spaces – RGB, YUV and Lab – and the last one turned out to be the most appropriate: it matches most accurately how the human eye perceives color.
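Assuming the colors have already been converted to Lab, “similar” can be expressed as a simple Euclidean (CIE76) distance – a sketch with a purely illustrative threshold, not necessarily the one the gem uses:

# CIE76: Euclidean distance in Lab space. c1 and c2 are
# { l:, a:, b: } hashes; the threshold is illustrative only.
SIMILARITY_THRESHOLD = 10.0

def lab_distance(c1, c2)
  Math.sqrt((c1[:l] - c2[:l])**2 + (c1[:a] - c2[:a])**2 + (c1[:b] - c2[:b])**2)
end

def similar?(c1, c2)
  lab_distance(c1, c2) < SIMILARITY_THRESHOLD
end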

For performance reasons I also added a color-limiting phase for cases when the number of colors is so big that calculating groups would take too long – by default I take into consideration only the 10000 colors with the highest percentage values.

Summary

So, to sum up, the phases go in this particular order: limiting colors, clustering, then noise reduction. The result is a Ruby hash with the main colors as keys and the colors grouped under them as values.

You can use it to display a color palette:

palette

After doing so much work on that I decided to make it publicly available and extracted the code to a gem. I named it gauguin in honor of my favourite artist.

Recoloring

One of my Lunar colleagues, Hania Seweryn, came up with a great idea – a feature that would take the image and its calculated palette and return a new image, colored only with the main colors. I loved the idea, so I implemented it in the gauguin gem.

For the above image it would be:

colorful.png.modified

gauguin

What can I use this for?

You can use the palette method whenever you need to get a realistic color count of an image or want to create color palettes based on images. The recolor method can be used to reduce noise, for example before vectorising an image.

It was very useful in my original task – I needed to validate the color count of images uploaded to our application because they were meant to be printed, and the price depended on the number of colors. Images were to be vectorised before printing, and thanks to the recolor feature I could show the user how the colors would be reduced when the image is printed.

For both retrieving the palette and recoloring, you can check out this demo site.

Aleksandra running a bash workshop

Software Carpentry is a non-profit organisation which runs workshops all over the world, provides open access teaching materials, and runs an instructor training program.

The main idea behind these events is to help scientists develop their programming skills so they can use them on a daily basis to automate their work.

The workshop program usually covers the following areas:

  • the Unix shell (and how to automate repetitive tasks)
  • Python or R (and how to grow a program in a modular, testable way)
  • Git and GitHub (and how to track and share work efficiently)
  • SQL (and the difference between structured and unstructured data)

Attendees work on their own laptops, with everything installed, so that they are able to keep working on their own later on.

This weekend, thanks to Aleksandra Pawlik, we had the first such workshop in Cracow – Women in Science and Engineering. This particular event was targeted at women who work as scientists and who could use programming skills to be more productive in their work. There have been lots of events targeted only at women lately, as they tend to be too shy to go to the usual events, so there is a strong community that tries to convince them that they can do anything they want.

I had the pleasure of helping during the event – while Aleksandra and Paulina Lach were running the workshop, I, along with Barbara Szczygieł, Iwona Grelowska and Patrycja Radaczyńska, was ready to solve issues and answer any question from attendees. And one of my Lunar Logic colleagues – Basia Madej – helped with the organization.

Smart stickies system

The event was, as usual, two days long.

On the first day Aleksandra ran a workshop on bash and version control.

It was very important to participants, as they are sometimes required to log in to a server without a graphical interface, so they need to know how to use the command line.

Version control can also be really useful, e.g. while working with a colleague on a publication.

On the second day Paulina Lach taught Python. Attendees could learn how to replace their usual Excel documents with more efficient Python scripts, how to plot things, how to use the numpy library to compute operations on matrices, and so on.

Last, but not least, participants learnt about the power of databases and how they can use them in their work.

We agreed on a very smart system of red (or pink ;)) and green stickies – when an attendee finished a task without problems, she put up a green sticky, and any time she had an issue she used a red one, so people could react immediately to help.

It was really great to see all these women fascinated with the topics and willing to continue learning programming, feeling that it can be really useful for them.

There were lots of people interested in taking part; unfortunately, many of them ended up on the waiting list, but I believe there will be more such events in the future.

I really like that the community is growing here!

Women in Science and Engineering

startrek-01_360

Imagine going to your first Agile Testing conference. It’s the opening keynote. What do you see?

I saw Spock. On stage. With “Welcome to the future!” written behind him. And unicorns…

Yes, the main theme of this year’s Agile Testing Days 2014 was what’s on the horizon in software testing, and some of the speakers took it quite seriously. Suggestions of how tomorrow’s world of software testing will look sneaked into almost every talk. The conference started with Lisa Crispin and Janet Gregory, dressed up in Star Trek costumes, speaking on how we should ready ourselves.

 

The opening keynote inspired me to gather together everything that I thought would prepare me for my agile testing journey. So throughout the whole conference I was looking for ideas that would give me clues on how to prepare for my own future adventure.

Here’s a list that I came up with:

1. Experiment and be creative!

The only thing you can say for sure about the future is that it brings change. There is no point in being stuck with only one correct style of testing (I don’t even believe such a thing exists). To embrace everything the future brings, you have to be creative and sometimes do the unexpected.

The need to experiment was first highlighted by Lisa Crispin and Janet Gregory who, during the opening keynote, showed that the best way to face complex problems and new, unexpected things is to try fresh techniques.

The talk that inspired me the most in terms of creativity was Jan Jaap Cannegieter’s “Flexible testing and using different sides of your brain to optimize testing”. He proved that one way of testing is not good for all projects. Sometimes you have to follow your intuition, try new things with a goal in mind, question everything you do or just play with your work. To continue down this path I went to “Growing into Exploratory Testing” by Peter Walen, where I learned that sometimes the best way of testing is not creating a big plan, but just letting your next move be determined by the previous one.

2. Communicate

There could be a whole book written on how communication helps an agile tester. Seriously. During the conference I realised how much value our everyday conversations add to the work we are doing together. Even when we don’t talk about code or features and just chat about our feelings, daily experiences or problems, our conversations are fruitful, because we share and reflect on our needs. This is what I got from Bob Marshall’s keynote “The Antimatter Principle”. He assumes that the purpose of creating software is to “meet people’s needs”. And by people he means not only users and customers, but also programmers, managers, testers and everyone involved in the environment where “making software” takes place. And why would we need to know our own or others’ needs to create good software? Because people are able to motivate themselves, and the thing you have to do to trigger intrinsic motivation is create a space where people’s needs are met and discussed.

3. Develop your skills

lego

“Testing is learning”. These are the words from Alessandra Moreira’s presentation “The mindset of an agile tester”. You have to study the application you are working with all the time. You also have to know a lot about testing techniques to choose the one that fits best, not the one you feel most comfortable working with.

During this year’s ATD there were many opportunities to develop testing skills. One of my favourites was Huib Schoots‘ workshop “Practical Agile Test Strategy with Heuristics”. It was really well run. Explanation, theory and exercises were balanced and effective. It inspired me a lot (and I’m currently using the heuristics I learned there).

Another great workshop was “LEGO TDD and refactoring” by Bryan Beecham and Mike Bowler. Who would have thought that playing with LEGO would explain to me the ideas of refactoring, TDD and (most importantly) the words written in the Agile Manifesto. I still have in mind the moment of dreaming up an awesome giant LEGO sculpture and realising that there are not enough bricks left to prepare a shipping container to deliver it to its destination. And sometimes I still catch myself thinking about applications as LEGO structures :)

4. Fight against what limits you, embrace your talents

I couldn’t write about this year’s ATD2014 without mentioning Antony Marcano’s keynote “Don’t put me inside the box”, which was a true enlightenment to me. It made me aware that calling people a “human resource” is always harmful. Each of us is a unique personality with multiple talents, but we often forget about that and don’t use what’s best in us. We allow ourselves to feel limited by our job title, desk, workspace, mindset. But guess what? We are not machines; we have our special strengths that always add value to the work we are doing. That’s why, just like for Antony, my favourite job title is my own name, and my skills are as much a part of my everyday work as my personality. Knowing what’s best in us and fighting against the things that limit us makes us a better… whoever we need to be.

5. Don’t be afraid to… fail

Sometimes we have this awesome idea that would change everything. Our mini revolution. We force it to work, we try to share it with others and… it turns out that the idea wasn’t that good at all. We get frustrated and forget about it. No one shares stories of failure, nobody publishes research that finished without a clear result. Wrong!

In the second day’s keynote Alexander Schwartz and Fanny Pittack shared their stories of failure. Why? Because failure is not always the end; it can be the beginning of creating something better. During their presentation they showed that frustration (ours or our team members’) is always important feedback and that we learn a lot from ideas that didn’t work out.

Another important factor is creating a failure-tolerant environment, which was mentioned in Roman Pichler’s “Strategy testing” keynote. Life is too short for delivering things nobody needs. Examining and analyzing ideas at an early stage is very important. Accepting the fact that some of them might fail can help you focus on the really important problems.

agiletd

My list is ready, when does the journey start?

“Your future is created by what you do today, not tomorrow,” said Richard Seidl in “5 habits to survive as a tester”. There is no starting point for the journey. My list will grow with each and every experience. There was a slide in Lisa Crispin and Janet Gregory’s presentation that said:

“As we continue to go where no man has gone before, can we really predict what will happen?”

I don’t know what the future will be like, but I can surely say – I’m ready for it!

guard

Tests are important

Regardless of what I work on – frontend or backend – the most important thing for me is to be sure that everything works properly after my changes. That’s why, in my opinion, tests are essential.

In order to not have to remember to execute tests on every change, I use the guard gem, which does it automatically. So I always add guard to every project.

Complicated project with lots of modules

Let’s assume we have a project that consists of separate modules, each having its own test suite. Theoretically you could set up guard for every module separately, but then you would have to monitor a separate terminal tab for each module to be sure that everything is green.

It’s easy to get lost in tabs, so I wanted to set up guard in the usual way: one terminal with a single guard process watching changes across all the modules.

I found the watchdir option, which I thought should do the job, but it didn’t work the way I expected. I touched base with the guard core team and it turned out that this option is just an optimization for really large projects in order to not kill your processor or hard drive.

New option in guard to the rescue

After discussing the issue with the guard team, I decided to implement a new option for that. The new option is called chdir. There are changes to both guard and guard-rspec; they have been merged into master and are waiting for a release.

Assuming you have your project set up like here – all the usual tests in the spec directory, but also a moduleA directory with a separate test suite in moduleA/spec – you can use guard to monitor all your modules by running the usual command bundle exec guard start with a Guardfile configured this way:

# Iterate over every test suite: the main app ('.') and each module.
[['.', ''], ['moduleA', 'moduleA/']].each do |dir, prefix|
  guard :rspec,
        chdir: dir,
        cmd: "cd #{dir} && bundle exec spring rspec" do

    # Run a spec whenever it changes itself.
    watch(%r{^#{prefix}spec/.+_spec\.rb$})
    watch(%r{^#{prefix}lib/(.+)\.rb$}) { |m| "spec/lib/#{m[1]}_spec.rb" }
    watch("#{prefix}spec/spec_helper.rb") { "spec" }

    # Rails example
    watch(%r{^#{prefix}app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
    watch(%r{^#{prefix}app/(.*)(\.erb|\.haml|\.slim)$}) { |m| "spec/#{m[1]}#{m[2]}_spec.rb" }
    watch(%r{^#{prefix}app/controllers/(.+)_(controller)\.rb$}) { |m| ["spec/routing/#{m[1]}_routing_spec.rb", "spec/#{m[2]}s/#{m[1]}_#{m[2]}_spec.rb", "spec/acceptance/#{m[1]}_spec.rb"] }
    watch(%r{^#{prefix}spec/support/(.+)\.rb$}) { "spec" }
    watch("#{prefix}config/routes.rb") { "spec/routing" }
    watch("#{prefix}app/controllers/application_controller.rb") { "spec/controllers" }
    watch("#{prefix}spec/rails_helper.rb") { "spec" }

    # Capybara features specs
    watch(%r{^#{prefix}app/views/(.+)/.*\.(erb|haml|slim)$}) { |m| "spec/features/#{m[1]}_spec.rb" }

    # Turnip features and steps
    watch(%r{^#{prefix}spec/acceptance/(.+)\.feature$})
    watch(%r{^#{prefix}spec/acceptance/steps/(.+)_steps\.rb$}) { |m| Dir[File.join("**/#{m[1]}.feature")][0] || 'spec/acceptance' }
  end # closes the guard block
end   # closes the each loop

There are 4 important things here:

  • the chdir option passed to the guard method
  • the change to cmd – guard needs to change directory to execute your tests in moduleA
  • a prefix added to every watch pattern
  • everything wrapped in a loop, once per test suite

Now you can run bundle exec guard start and check that it also monitors the tests in the moduleA directory.

I’m back from what was the most intensive event I’ve been to in a long, long time. Lean Kanban Central Europe (LKCE) has a special place in my heart. If I were to pick one from all the global communities, it would be the Lean Kanban community. LKCE consistently exposes me to new ideas. It is a place where my network of connections systematically grows – the list of people I first met there is so long that I won’t even dare to mention them all, as I would inevitably forget someone.

Last but not least, I’m part of the program board, so there’s a little bit of my input in how the events have been shaped over the years.

This year’s conference (LKCE14) was special in that we decided to have tracks. On one hand that meant the program board members had a lot of authority in shaping how our parts of the event looked. On the other, it meant quite a lot of work during the conference.

That’s not all. I had two presentations at LKCE14: a regular one on leadership and a pecha kucha on learning.

If that wasn’t enough I stayed for two days more for Don Reinertsen’s super-intensive product development workshop. After the whole week my life force has literally been spent.

I am a happy bunny though. I’ve had a lot of great discussions. I’ve met new people and had a chance to catch up with some old friends. Even though I had limited freedom in choosing the sessions because of my hosting duties, I managed to attend a bunch of great presentations. I ran a leadership track which worked out exactly the way I wanted. I’ve learned a ton. I spent a couple of days at a workshop with a guy who knows more about product development than I will likely learn in my entire life. I had an insane amount of awesome German beer.

So, here are a few of my highlights of the event.

Martin Jensen ran a fantastic session on organizational culture. If that wasn’t enough we went even deeper discussing the topic late at night in a hotel bar and, obviously, having beers.

Organizational culture was one of the themes of the whole event. There was Martin. There was, awesome as always, Katherine Kirk. There was Marc Burgauer, whom I had the pleasure to host on my track. I added my two cents’ worth with my presentation.

The leadership track. OK, I am mentally programmed not to brag about myself and to focus on areas for improvement. I can’t help it though – when the track finished I felt really proud of myself and happy. This is also an opportunity to thank Esther Derby, Liz Keogh and Marc Burgauer for being my guests and speakers on the track. You really made my day. You’re awesome.

Don Reinertsen’s workshop. I’ve been his fanboy for some time now, so I had a lot of expectations. Don definitely met them and went even further. What I will say is that every organization that builds products should send its product people to one of his workshops.

Brodzinski_PK_Illu

Pecha kuchas were, as always, one of the best moments of the event. The very constraining format of the session results in a lot of creativity and very focused messages. And Markus Andrezak produced pure magic by hosting the show. If not for him my stress level before popping up on the stage would likely have been unbearable.

One of the measures I use to track how much I liked an event is how little I slept. At LKCE14 I went below 6 hours a night. As if that wasn’t enough I didn’t have a single hour of wake time when I wasn’t doing something related to the event. It was super-intensive indeed.

I am already looking forward to Lean Kanban Central Europe 2015.

Kraków 30-XI-2014
ATD 2014 Exam
Grzester

You have a picture. You’ve got one minute to look at it. Describe all the things you can see in the picture and explain what you think is happening.

Ok, ok, let’s just stop this exam right now; one minute is not enough time to finish this task… Why? Because Agile Testing Days 2014 was an AWESOME event and I just can’t describe it in one minute. But the image attached above (made by our <3 Gosia) will still be pretty useful, because it presents, more or less, what I’ll remember from this year’s edition of ATD14.

In the background of the picture I can see the ATD2560 banner and a flying ship…

Yep, ladies and gentlemen, we are in the future. Why? Because the motto of this year’s conference was “The agile movement is thriving! How does it affect the future of agile software testing”. The opening keynote, prepared by Lisa Crispin and Janet Gregory, was literally and figuratively related to the future, guess why:

Photo by Paul Carvalho

Star Trek, anyone? The main thought of this talk was the hypothesis that ‘we can’t predict the future’. But Lisa and Janet gave us a few simple tips on how we can adapt to changes… become a shape-shifter to adapt to ever-changing needs! You may ask: how can I become a shape-shifter? The pattern is really easy… learn new skills, follow new technologies, constantly improve your communication skills and remember that the quality of the product will remain the main part of our job as testers. The ladies also touched on one thing which, over the last few years, I’ve been applying in my daily work: giving customers what they need, not what they ask for! The shape-shifter skills listed before are pretty damn useful for realizing this task. You need to communicate with the client if you want to discover what he needs, you need to understand his product, you need to present new technologies to the client, and finally you need to deliver him a high-quality product set in the reality of business value.

The banner is placed on some kind of ancient Greek building…

So, you may ask: where are we? In the future or in ancient Greek times? No worries, we are still in the future, but David Evans, at the end of the second day, reminded us that we should respect old, traditional testing values described as the pillars of testing. On stage, David raised and described a Testing Temple.

Photo by Ben Williams. PS. Look at the video from his talk too.

At the top of this temple we should place the product of testing: Confidence, which is supported by safety and courage. All of those things should be propped up by the stable pillars of testing: Stronger Evidence, Better Design, Greater Coverage, Faster Feedback. Each pillar represents a measurable, valuable quality of testing. Our confidence should be raised by getting Stronger – Better – Greater – Faster! The temple was almost ready; all that was left was to lay the foundations. David decided to base the Testing Temple on three types of foundations: Team Foundations, Capability Foundations and Technical Foundations. At the very bottom of the temple we should place Leadership and our Engineering Discipline.

The temple model David presented seemed to me a way of sorting, organizing and reminding ourselves what testing is. During our daily work and activities it’s really easy to forget about our testing roots and core values, and sometimes you need to rebuild your Testing Temple. A really inspiring keynote presented by an exquisite speaker.

Someone is jumping out of the box….

The morning keynote by Antony Marcano, who talked about “Don’t put me in a box”, was a bomb!

How do you answer a simple question such as: what do you do?

Most of us describe ourselves with noun-related keywords about our job: developer, tester, plumber, forester. Our nature is to put ourselves and others into ‘job title’ boxes. Valuation and evaluation based on job titles still happens, same as calling people resources. Stop. Don’t do this. DON’T PUT ME IN A BOX! Because it sends the message that people can be thrown away. Don’t be a tester, coder, designer; don’t be a machine which is plugged in when there is work to do. Try to become a T-shaped person. And remember! Focus on performing and remember that “(…) quality comes from people, not process.”

(…) and this guy is holding LEGO and TDD flags.

Photo by Quality Engineering

And the (my personal) award for the best workshop goes to… Bryan Beecham & Mike Bowler! Well, this time I’m a little biased. I LOVE LEGO, so these two gentlemen had won this competition long before ATD2014 started, but during the 2-hour workshop they proved that they deserved it. During the first part we learned how to write TDD scenarios from scratch. At my table we tried to build a LEGO house, so:

  • RED, GREEN, REFACTOR,
  • RED, GREEN, REFACTOR,
  • RED, GREEN, REFACTOR,
  • and we have a ‘fully’ functional house!

Learning TDD with LEGO illustrates very well the process of starting from failing tests. With subsequent exercises the workshop hosts showcased more and more aspects of TDD. The second part of the workshop was focused on refactoring and was directed more at developers. Even though I am not working much with code at Lunar Logic, Bryan and Mike somehow caught my attention. The last part of the workshop was an exercise in cooperation. Together with other workshop participants we tried to build: product → container → truck → crane → port. This exercise showed us how important communication and adapting to new requirements are at all levels of building a product. Even though I learned a lot about TDD and refactoring, another lesson from this workshop was the recipe for running an excellent workshop: a good topic, charisma and… LEGO!

Yep, the image definitely describes the future – we don’t have robots nowadays, do we?

Source: www.societeperrier.com

The last talk, from Daniël Maslyn, focused on “Agile Testing for Future Robotics”. Daniël presented some potential paths along which robotics can develop in the future and how agile can shape the future of this science. He asked some key questions about the future of testing. How can we, as agile testers, adapt to the robotics industry? How will we test future AI software, hardware frameworks, devices and complex scenarios for systems involving robotics? This talk was a nice follow-up to Lisa and Janet’s keynote from the first day, and made me realize once again how important it is to learn and absorb new technologies.

Conferences, for me, are mostly about recharging my batteries. I look there for new inspirations and ideas; I want to learn. After this year’s ATD I am fully recharged, but it’s time to come back from this future world and reconsider how I can prepare for what’s to come, and how I will discover myself as a…? Who knows :)

From time to time somebody asks me if I’d like to try something completely new or different. And though I’m not a very daring person, I say yes. It was like that when I was seven and my mother asked me if I wanted to go to music school and play an instrument, and then I devoted half of my life to it with the belief that it would last much, much longer. It was like that some time ago when Paweł asked me if, instead of going to a programming conference, I would go to Hamburg for Lean Kanban Central Europe 2014. And I answered – guess what. And it was a good answer.

It would be impossible to cover all the thoughts I had the pleasure to listen to, since I’d have to write an entire essay about each session, but I’ll try to share at least a few of them.

first steps on new board

What Makes a Winning Team?

Photo by Nadja Schnetzler

The journey started with a keynote presented by Mary Poppendieck, with an overview of commonly raised topics in the agile world. She was talking about what makes a winning team. About how it looks in a military model and how it should work within a team. About how important it is to have awareness of the overall situation, goals and timing, but also of the constraints at each level. About how reliability stems from mindfulness. And about how we should change from being a delivery organization into a problem-solving one. A nice talk to start with.

And then came problems. And their beauty, brought by Arne Roock. What is a problem? Arne presented a definition: the gap between the perceived and the desired state, which leads to the conclusion that to solve a problem we can either change our perceived state, our desired state or our perception.

In his talk Arne showed us why, when a problem is encountered, we should avoid jumping to conclusions and rather stop for a bit and start asking ourselves “why?”, and this is a very important question. Not only WHY something is wrong, but also WHY we want to fix it and WHY we failed to prevent the problem. This is important if we want to be better in the future, not just reusing the same known solutions for the same known problems appearing over and over again. As a helpful tool for this he briefly presented the idea of A3 thinking, which was covered more deeply later by Claudio Perrone.

The following talk, by Martin Jensen, was about culture as THE competitive advantage.

A combination of well-understood structure and culture provides internal efficiency and external attractiveness. That’s why we all want to work in companies with a strong culture. Even if you were to copy structures, methods and rules from other companies, you can’t copy their culture. But what is culture? Slogans written on walls? Definitely not. It’s about feelings, about what people think when they arrive at work, about symbols, about how people behave. But it is not easy: values have to be described and understood, and even everyday behavior needs discipline.

After the break we enjoyed a great pecha kucha session in which Paweł Brodziński, Chris Young, Markus Andrezak and Claudio Perrone shared their experience in a very witty way; it was both knowledgeable and fun.

time for seasickness

"I love deadlines. I like the whooshing sound they make as they fly by." Douglas Adams

Photo by Daniel Dubbel

And then came two brilliant talks which actually made me truly depressed because of how often we fail to prevent our clients from hurting themselves. But the good thing is, when you know that the client will ignore your questions and suggestions, there are authorities you can send them to:

No 1. Joshua Arnold – Value and Urgency: The power of quantifying the cost of delay.

In a time when everything is on demand and every feature is a must, priorities start conflicting. When every week of delay in launching a project causes a loss of potential income, we have to realise that not only will the product start earning later, but in the longer perspective it may not be able to reach its potential level at all, and in the context of periodical products we may not fit into the given period of time at all. This translates into the cost of delay, which can be measured per week, and nobody likes to watch income passing away. That’s why priorities are important, as is knowing values – basically measured in the users’ willingness to pay – and when to start something or when to stop.

Another thing is black swans: those features – there are not many of them – which may have the highest cost of delay. If you identify them, you may be able to make much better decisions in terms of value and urgency.

No 2. Troy Magennis – Risk Management and Forecasting.

Risk in forecasting has several sources, such as: work, dependencies, throughput. For all of them you need data, and that data comes from people – a terrible system to manage. People are biased; even expert opinions have to be checked. But even if they bring data that isn’t necessarily what you expected, don’t ever embarrass them, or you’ll destroy any chance of getting reliable data from them again.

Another thing is estimates. We – developers – usually don’t like them; they’re either too optimistic or not acceptable to the client. Knowing the past is very important, as you can then make your forecasting more contextual. Even then, estimation should be trained and practiced, and it should express uncertainty, so instead of using point estimates try using ranged estimates. They are, by the way, much more honest.

The day was closed by Karl Scotland talking about kanban as a whole system which has its interventions and impacts, so we could look at it from very different perspectives. Within interventions we have: studying customer needs, delays and feedback; sharing learning by visualizing it; stabilizing the process by limiting WIP – for this, the definitions of ready and done have to be clarified – and searching, i.e. measuring what may be improved and why. On the impact side, flow, potential and value are the ones that emerge.

a brand new day

The next day came with a keynote by Henrik Kniberg looking into the connection between problems and projects. Quite often people think that to start a successful project you need a good idea, but that’s not what it should start from. It should rather start from an unsolved problem, which has to be well understood. Then you can think about stakeholders and their needs, and simply iterate until you reach an MVP. Henrik was also talking about the distance between makers and users and how important it is to minimize it.

the blame flow

Photo by Piotr Leszczyński

Then, starting from the point where a traditional hierarchy is a flow of blame in which we are all victims, Claudio Perrone showed us that while agile is focused on process and tools, we have to remember that those tools are FOR individuals. And as solving problems is a manager’s and programmer’s everyday life, he described how the A3 thinking method may be helpful. It looks pretty simple: just take an A3 sheet of paper and follow the exemplary schema – why do we consider something a problem, what is the current and the expected state, what possible steps may we take and what prevents us from taking them. Don’t forget to ask the 5 whys here; it will give you the chain of facts. Then go to countermeasures and required steps, and check them, so you can end with some conclusion about what to do next. Simple, isn’t it? Just remember: do it with a pencil and rubber, check often with somebody you can treat as a mentor, and share what you learnt, make it visible. So it’s not only about solving problems; it’s more about making problem solvers, since it’s not a matter of what we do but what we learn by doing it.

depth is coming

Want to hear about politics in a lean / agile environment? Be aware, Katherine Kirk is on her way.

Corporations encourage psychopathic behaviors on all levels, which may lead to psychopathic leadership. As we know, a psychopath doesn’t know compassion; unfortunately, normal people can turn it off too. It can all be seen in internal politics. And politics may even be amplified by lean and agile. That’s where going from a static structure and hierarchy to a rotational one may help.

Also, when using agile tools we have to remember that our approach will change the outcome. Moreover, we have to know that we are in a constant state of delusion, and we have to get rid of the “I know” attitude and simply accept it. What we can do is investigate reality by looking into many sources and notice that intentions are not necessarily coherent with outcomes, which causes discrepancies. What’s more, we have to test our ideas, contemplate and seek vipassana insights.

So, to cool down politics, follow this strategy: take a big breath, keep equanimity, transform understanding into insights, practice compassion, and be curious, patient and sustainable, with a bit of grit.

I’m not sure whether the lack of questions afterwards was because it was all so obvious to the audience, or rather because they needed some time to realize how deep it was.

captain, captain, may I lead?

"If you think education is expensive - try ignorance." Derek Bok

Photo by Arne Roock

After that experience I chose to listen to Esther Derby talking about leadership on all levels. Let’s define a leader. Somebody who says “do what I want”? Nope. Somebody who inspires and uses charisma to encourage people? A bit closer, but still not quite. The definition used not only in this talk is G. M. Weinberg’s: leadership is a process of creating an environment in which people become empowered. When thinking about empowering people, a few things are important. Knowing all the whats and whys, for both the small and the big picture, which translates into clarity; creating proper conditions, since people want to do their work and the organisation should simply support it; and remembering about constraints, so people know how and when they can act and what they can’t do. If there is any bounded autonomy, it should be articulated so people know how to move. So no matter the level – steering, enabling or front line – all three of these: clarity, constraints and conditions have to be known and coherent in length and breadth. That gives transparency and trust. It’s not easy and it can’t be achieved fast, but it’s worth the effort.

In the next talk Liz Keogh took us on a journey through the metaphors we tend to use and how they affect our everyday work. We treat our activities as a substance – something we can take from one desk to another, put something into, like a box, and then move outside. These metaphors are very persuasive. We talk about code quality as if it were a tree that grew here, but what we really mean is that coding quality is something we would like to improve.

Another thing we do is break things into small pieces, and then into even smaller ones to make them go faster and faster, but after joining them back together we don’t necessarily get the whole thing again. We are definitely more than the sum of our parts. It’s interactions that make the whole. And interactions between people mean relationships. Developers, testers and users should not be separated. They should be connected; they should interact. So, paraphrasing Liz: just think what would happen if instead of working FOR people we started working WITH them?

ride the power, gambling man

"What’s the worst that can happen?” – Enabling authentic trust at work

Photo by Marc Burgauer

And then Marc Burgauer asked us: what’s the worst that can happen? He talked about trust – how important it is, and how hard. If trusting yourself is the most neglected type of trust, how can we achieve it with others? A command and control attitude can bring only loyalty through fear, and as a discharge of discomfort it leads to the blame flow. Trust is not a prerequisite; it’s an outcome. It comes from everyday commitments and transparency, from sharing your world and meanings. It’s also about failures. “If you haven’t failed you haven’t tried hard enough” is one of the mottos. If you are not allowed to fail, how hard will you try?

Riding the power was another thing Marc talked about. It’s about the mindset you need when you are attacked, and it boils down to four things: re-expressing your position, listing points of agreement, checking what you learnt and presenting your view inside a shared context.

The whole conference was closed by Don Reinertsen’s keynote about variability and robustness. As the opposite of fragility we try to achieve robustness, both passive and active, with more feedback loops. With the latter we still have to remember that some feedback loops may hide information, so checking more indicators than the primary one is very important. And don’t rely only on watching statistics – go and talk to devs. But this is not everything. We live in a stochastic world, and in many cases variability is considered something bad. By being robust we want to absorb variability. But what if, with more options, we started performing better than without them – if more options gave us more outcomes? That’s real anti-fragility. That’s why we should exploit opportunities, check options and bypass obstacles – simply, be a smart gambler.

back on the mainland

This was definitely a wonderful experience. I was not able to attend all the talks – you know, conflicting priorities. I hope the videos will be available soon – check the conference page for them. And if you’re still not convinced, check #LKCE14 too.

Some people asked me whether it made sense for me – a developer – to attend such a conference, or whether it means I’ll now become a manager. For me, this knowledge is needed not only by managers, product owners or CEOs, but by developers as well. We work closely with our clients to embody their idea of a product. We talk with them every single day. Often we intuitively know that what they want us to do is just completely wrong. But with only our intuition, or even experience, we are just devs – why should anyone listen to us? That’s why having such a set of tools and knowledge is important for us too. Actually, that’s what I would teach kids at school.

So, I jumped into an unfamiliar boat and I’m really curious where this cruise will take me.

A photo by Agnieszka Matysek.

Two weeks ago I visited Gliwice to be a coach during Rails Girls Silesia. It was my second Rails Girls event (the first one was Rails Girls Youth in Krakow, in April). What I like most about Rails Girls is that it’s so flexible – you can easily make it work for everyone, no matter how much experience a person has in programming.

I think the formula of the workshop is really well thought out. There are many approaches, and Agnieszka covered them during the Friday meeting for coaches. When it comes to preparing the environment on the girls’ computers, most people use Rails Installer, since the dominant platform during the workshop is Windows. But it’s also possible to use online development tools like Nitrous or Cloud9 (the latter even offers a Vim mode!).

When everything is set up, some coaches start with TryRuby or even Scratch, just to familiarize the girls with the basics of programming. Others jump straight to generating the application and its content. Another way is to start with static HTML files and, step by step, turn them into a Rails application.

What did the workshop look like?

The Rails Girls Silesia team prepared everything – they organised the venue, provided food and drinks, and made t-shirts and stickers. They deserve kudos! On the first day we set up the environment, but also had a chance to get to know our teams and decide what to do during the workshops. I was working with three amazing girls – Ula, Iga and Gosia. They decided to create an app which helps in managing expenses and income. They wanted to have multiple users, various bank accounts and categories of operations – sounds pretty advanced!

My team started with tryruby.org. Then we generated an empty app and talked about the basics of the Rails framework, its main components and what they are for. After that, we wrote down the features we wanted to implement, chose those which make up an MVP (Minimum Viable Product), and prioritised the rest. The next step was designing the database and the relations between models.

On the second day we started implementing our application. We generated the first set of components with ‘generate scaffold’ and continued by modifying the content. We had some environment issues and spent some time debugging, but most of all we had fun watching the application get closer and closer to its final shape. As part of the schedule, there was also time for lightning talks and Bentobox (an exercise which helps to grasp the structure of a web application and the technologies that are used).
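
To give an idea of that step, here is the kind of command we used – the model and its fields are my own guesses, matching the expense-tracking app described above:

$ rails generate scaffold Operation description:string amount:decimal operation_type:string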

Does it even work?

Yes! But, of course, no one will learn programming in a day and a half. The goal of Rails Girls is not really to instantly turn girls into programmers, but rather to show them that programming can be interesting, fun and not all that difficult. It is also to give them examples of how one can learn programming; there are lots of great resources online, but it’s easy to get lost.

Another important aim of Rails Girls is to show participants the path from beginner to developer. Sharing personal experience in creating software, especially how it started, gives them some practical ideas and reassures them that it’s not that hard.

Among the girls who had programmed before, there was a common problem they had met after learning the basics of programming: what to do next? Every software developer was a beginner at some point. I think we all know how important it is to get a perspective on what you can start doing next, what application you can write, and how to choose an appropriate level of difficulty.

Why bother?

As a software developer considering how to improve your technical skills and gain more professional experience, non-profit work with beginners might not seem like the best occasion to become a better programmer. However, creating software is teamwork: it requires good communication between team members and the ability to share knowledge and experience. I think being a coach during a Rails Girls event is a great way to acquire and exercise these skills.

During the workshops, I met a high school teacher (her student participated in the Rails Girls event, and she wanted to find out what this is all about). There were students of various faculties, women working in marketing, and graphic designers; some of them had a background in programming, and some of them didn’t. Working a whole long day with three people who usually come from totally different areas of knowledge is really refreshing. A bunch of different mindsets and approaches makes for an inspiring and productive environment for both participants and coaches.

If you ever have a chance to be a coach, don’t hesitate!

Friday Hug photo by Basia Redyng.

Have you ever faced the dilemma of which to choose – making your everyday life with code easier by using ActiveRecord polymorphic relations, or being a more responsible person and thinking about your database consistency?

Maybe you haven’t, but I’ve actually started thinking about it more. And while I was working with PostgreSQL and digging through its documentation once again, I convinced myself how powerful this tool is.

Just imagine that you could treat each type from your polymorphic relation as a separate one and set a required foreign key on each. Imagine they all acted as if they were just one, so you don’t have to change anything in your code and it’s all completely transparent.

So let’s make it happen. Let’s make it possible to add a foreign key constraint to each polymorphic relation type in such a way that ActiveRecord knows nothing about it.
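
For reference, here is a minimal sketch of the standard ActiveRecord setup we are talking about, using the Like, Comment and Post models that reappear below:

class Like < ActiveRecord::Base
  # likeable_id + likeable_type columns, but no database-level foreign key
  belongs_to :likeable, polymorphic: true
end

class Comment < ActiveRecord::Base
  has_many :likes, as: :likeable
end

class Post < ActiveRecord::Base
  has_many :likes, as: :likeable
end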

How does it work?

We often use callbacks in our ruby code to catch events and run some extras. It’s quite natural. But not many developers think about the database and SQL as something you can also program, something that can take a much more active role in your application than just storing motionless data. Here you can also find functions. And you can also find callbacks – they are just named differently: triggers.

Using great functionalities such as inheritance and partitioning, it’s pretty simple to create a partition table for each type of the polymorphic relation. And you can use triggers to decide which partition to use when you want to add, update or remove a record.

Actually, there isn’t anything new about it. All these behaviors are well described in the PostgreSQL documentation, and many forums and newsgroups show how to use them with ActiveRecord. But those solutions require a lot of SQL in your migrations – and what’s worse, SQL that is very fragile to changes. And we are used to nice, simple and intuitive code, aren’t we?
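
To illustrate the kind of boilerplate involved, here is a rough, hand-written sketch of such a migration – my own illustration of the pattern from the PostgreSQL docs, not code generated by the gem:

class AddLikesCommentsPartition < ActiveRecord::Migration
  def up
    execute <<-SQL
      -- partition inheriting from likes, constrained to comments only
      CREATE TABLE likes_comments (
        CHECK (likeable_type = 'Comment'),
        FOREIGN KEY (likeable_id) REFERENCES comments(id)
      ) INHERITS (likes);

      -- trigger function redirecting inserts to the partition
      CREATE OR REPLACE FUNCTION likes_insert_trigger() RETURNS trigger AS $$
      BEGIN
        IF (NEW.likeable_type = 'Comment') THEN
          INSERT INTO likes_comments VALUES (NEW.*);
        END IF;
        RETURN NULL;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER before_insert_on_likes
        BEFORE INSERT ON likes
        FOR EACH ROW EXECUTE PROCEDURE likes_insert_trigger();
    SQL
  end
end

And now multiply that by every type, every change and every new relation.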

Simplify your life with pg_morph

While putting all this SQL together, the idea of making it more reusable appeared. Encouraged by a client, I ended up writing a gem to handle all those magical operations I required from my database. So thank you, Aaron, for pushing me in this direction – and yes, I finally found time to finish it.

It works as follows – if you have, for example, the models Comment and Post, and both are in a polymorphic relation with Like, you can add migrations for them:

add_polymorphic_foreign_key :likes, :comments, column: :likeable
add_polymorphic_foreign_key :likes, :posts, column: :likeable

The first migration adds a new partition table named likes_comments and, using triggers, redirects all inserts of likes on comments to this partition. The second adds another partition – likes_posts – and updates the existing triggers so that both types of likes – on comments and on posts – are redirected to the proper partition. The main likes table remains empty.

If after some time you find that you don’t need those relations any more, you can remove any of those structures almost as easily as they were added:

remove_polymorphic_foreign_key :likes, :comments, column: :likeable

The word ‘almost’ is there due to the fact that this migration would remove the whole partition, which may contain your data. In such a case, the gem will prevent you from doing this and force you to handle that data manually, either by deleting it or by moving it into a different table. Maybe it’s a handicap, but it’s better than losing data by accident.

Not only sugar

It is a very fresh project and there are things I’d like to see handled better.

The most important thing is keeping the main table empty, which would be extremely easy if ActiveRecord didn’t use a RETURNING id statement for inserts. The thing is that this id is taken from the main table, and omitting it in the trigger results in a not very nice nil in place of the new object’s id. That’s definitely not what we are used to and rely on. The workaround is either to allow the main table to save new records and then delete the duplicates, or to use a view over the main table.

The current version of pg_morph uses the first solution, but the view-based one is planned for the next release. It requires a bit more work to keep things as transparent as they are right now, so if you’d like to see a new version sooner rather than later, don’t hesitate to contribute!

www.2014.fromthefront.it

September is definitely conference season – most of our office is deserted and the ones left behind are looking forward to going somewhere themselves soon. Me (Anna), Julia, Małgorzata and Rafael decided to head to the beautiful city of Bologna (Italy) for the From The Front and the Temple of DOM conference. Yes, you read that correctly, and the name may seem familiar to you. The conference’s main theme was taken from the Indiana Jones movie, and it made all of us even more excited. The organizers used the Indy theme to create great sponsor banners and beautiful schedules, and dressed up the host, Pierre Spring, as Indiana Jones. Unfortunately, even Pierre’s style was not enough to outshine the total awesomeness of the venue – Teatro Duse – which is basically magical. The only drawbacks of the place were the lack of a WiFi connection and a bit too much darkness, but we quickly got over it and fully immersed ourselves in front-end talks of very high quality.

“It’s a long story. Better hurry up or you won’t get to hear it!”

The first day of the conference started with a great talk by Jeremy Keith, who managed to rewire the brains of probably all the conference attendees. What he showed us was unbearable… Did you know that all ducks wear dog masks? Now switching back to serious: Jeremy pointed out that a big, innovative feature of the web is that it’s not a platform – it works anywhere. Or at least it works anywhere until we break it by relying on technology that can fail. Nowadays we often have pages that depend on JavaScript to such an extent that they don’t even display basic HTML elements when it’s disabled. And that’s just wrong. So when building our websites, we shouldn’t be optimizing for any particular browser or system, and we shouldn’t completely disregard old browsers, as we often do. Instead, we should develop using progressive enhancement – start from the basics and add all the new fancy bits later, in a way that doesn’t break the experience for older browsers. As Andy Hume’s quote says, “Progressive enhancement is more about dealing with technology failing than technology not being supported”. But don’t worry, your page working everywhere doesn’t mean it has to look the same everywhere! It probably won’t. Jeremy repeatedly asked the question: “Do websites need to look exactly the same in every browser?” The answer couldn’t be anything other than a unanimous “NO” from the crowd.

Another talk I couldn’t wait for was Estelle Weyl’s “RWD is not a panacea”. I wanted to hear her newest findings, because I had finished writing my master’s thesis just before the conference and used Estelle’s article to support some of my ideas in the process. I was not disappointed. The optimization tricks and the examples of truly slow mobile experiences made me realize how much some things need to change in the way people use images, scripts and all the other heavyweight assets. It is not that your customers don’t use your website on mobile because they don’t need it… it is because the experience makes them give up after the first attempt! Thanks to Estelle, Julia has already used this statement as a starting point for a conversation about RWD with one of our clients. So we are not slouching! Andre Jay Meissner’s talk was focused on how to convince our customers to spend money on testing their product on different devices. Well, the option of including a testing budget in agreements sounds very enticing! Andre looks after LabUp!, a non-profit initiative that helps people open their own Open Device Labs. PS. Maybe we should look into opening one here in Cracow? Would anyone be interested in collaborating?

The first person to speak after the break was Gunnar Bittersmann, a very friendly, Polish-speaking (!) German who dedicated his presentation to CSS preprocessors – specifically, using them to put OO concepts into effect and achieve reusable CSS code without paying for it with bloated, presentational markup. It was helpful and refreshing, as we at Lunar Logic are currently in the middle of preparing new CSS guidelines. Gunnar presented the obvious drawbacks of some of the conventions (such as the ugly, massive class concentration in OOCSS’s HTML) and reminded us that we should keep Presentation (CSS) and Behaviour (JS) separate. It might seem obvious to some developers, but unfortunately it is quite often a forgotten rule, and we end up seeing awful, unnecessary style modifications done with JavaScript. For sure we will try the mix of Sass placeholders, mixins and extends after hearing your reasoning, Gunnar!

When it comes to responsibility, we might often try to escape it and just create websites that look great but are not bulletproof. Sally Jenkinson reminded us that it is our responsibility to get everything right and strive for the best experience possible for our users. The last, but not least, presentation of the day was prepared by Owen Gregory. His speech was perfectly edited and prepared; I felt as though I was watching an actor’s performance on the stage of Teatro Duse. The main purpose of this talk was to encourage all writers (me writing this blog post, you writing a tweet, someone writing a Readme file for GitHub) to master the art of composing beautiful pieces of the written word. The slides were minimalistic and well thought out, the speech encouraging and sophisticated. I am glad that such people find their way to front-end conferences and inspire us to strive for perfection. After the presentations we headed to the conference party. Though the place was not perfect (only one stand with 3 types of beer, plastic cups, outdoors and a bit weird), we could use the opportunity to share our impressions of the conference with the speakers themselves and had some time to talk to the people responsible for organizing it. It was cosy!

“You’re meddling with powers you can’t possibly comprehend.”

Jon Gold kicked off the second day of the conference with his anti-unicorn campaign. Well, he wasn’t exactly against real unicorns, or against the rare multi-skilled people in the programming world who earn that label – he was against applying the term “unicorn” to people at all. He also pointed out the difficulty of listing all our areas of specialization (because there is no word to name all these jobs at once; this was Indiana Jones’ problem as well: “Professor of Archaeology, expert on the occult, and how does one say it… obtainer of rare antiquities.”) and proposed that we call ourselves web makers and creators of the web. This actually makes a lot of sense as a job description (and sounds less silly than the name of a mythical horse with a horn on its forehead). I also came to realize that it is OK to be good at many things, just as it is OK to be excellent at one thing. By the way, Jon is excellent at speaking and good at many other things, so I don’t know if it is still fair. ;)

Jenn Lukas rocking the stage.

Later the same day three awesome women had their time on stage: Jenn Lukas, Sara Soueidan and Ulrika Malmgren. Each one of them rocked so much that anyone implying that women don’t fit IT jobs should shut up and crawl under a rock to hide. Jenn told us about meaningful animations and the problems with touchscreens that might result from using :hover. Her talk was funny, cat-powered and enlightening. I suggest watching it when the video appears instead of reading about it. Actually, you should watch all of the videos to get a whole lot of inspiration from the conference. Probably the most inspiring moment was seeing Sara Soueidan start her presentation even without her slides (due to technical problems), knowing by heart what was on them. It was amazing! I haven’t seen anyone this professional in a long time! Her presentation was the greatest concentration of all the knowledge you might ever need on the SVG topic and demanded a standing ovation. You can check the slides yourself, but be sure to also watch this amazing performance. The third of the rocking women was Ulrika, who presented very practical and easy-to-try techniques for testing our projects. From doing the regular checks and mind maps to seeing moonwalking bears, Ulrika was our guide in the topic of testing for 45 minutes and probably saved many apps from being buggy in the future.

The last presentation of the conference was by the great Christian Heilmann. Its title (“Rubbing the Sankara stones the wrong way”) probably gives you a hint already – it was full of Indiana Jones references. I must admit that the short clip from the movie narrated by Christian was probably the funniest thing I saw at this conference. The talk covered many problems the web struggles with today, and there is a wonderful blog post with notes on that by Christian himself, so I don’t think any additional comments are necessary.

“And what did you find?” “…Me? Illumination.”

To sum up, the conference was great. The talks were truly of high quality, the venue was beautiful and made it easy to approach the people you really wanted to talk to, and even the conference food was delicious (vegetarian option included! Gosia, Julia and I don’t eat meat and didn’t have to starve, as sometimes happens). I will definitely try to head there again and sightsee Italy some more. Thank you all for such a great time and the unimaginable amount of knowledge we had a chance to grasp.

Ania, Gosia, Rafael and Jul waiting for the conference to start.

And it begins

Baruco 2014 has come and gone in a flash. As the saying goes, “Time flies when you are having fun”. And so it was: the talks, the introductions, the discussions, and the lunches were all fantastic.

First day of talks

The first day of talks started off with the creator of Ruby himself: Yukihiro “Matz” Matsumoto. He discussed mRuby, a new implementation of Ruby designed specifically for embedded systems, and went into the details of how he managed to get a grant from the Japanese government to work on this project. He expressed his desire to get Ruby code into realms other than web development, specifically embedded systems. mRuby will have Ruby 1.9 compatibility, but less functionality than MRI (which he says should be called CRuby, since he doesn’t even actively develop for it anymore).

Next was a talk by Piotr Szotkowski, which emphasized the fact that Ruby has a lot of very useful functionality built into its standard library. He went through a few examples of this functionality by showing its uses in a gem he created called signore. The first thing he showed was the ‘abbrev’ module, which takes an array of strings and returns a hash that maps each possible unambiguous abbreviation to the string it matches – something that can be useful for building tab completion in Ruby. He then showed how, using the standard library, you can create a simple HTTP server with WEBrick, and a simple TCP server with the ‘gserver’ module. He also talked about the ‘prime’ module, which simply checks whether a number is prime. He wrote his own prime-checking method, and segued into the ‘benchmark’ module by showing that the prime check in the Ruby standard library actually took 3 times longer to run than his method. He showed the ease of use of the ‘benchmark’ module, which would later be used extensively in another talk.
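
A quick taste of two of those modules – a minimal sketch of my own, not Piotr’s code:

require 'abbrev'
require 'prime'

# Every unambiguous prefix maps to the full word – handy for tab completion.
Abbrev.abbrev(%w[ruby rails])
# => {"ruby"=>"ruby", "rub"=>"ruby", "ru"=>"ruby",
#     "rails"=>"rails", "rail"=>"rails", "rai"=>"rails", "ra"=>"rails"}

Prime.prime?(7) # => true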

Next, Pat Shaughnessy took us into the world of ActiveRecord, to pick it apart and find out how it converts an everyday query like ‘User.where(email: “email@email.com”).first’ into SQL statements, and then how SQL works in order to find this specific record. He showed how the database will normally do a lot of work, performing a sequential scan to find all records that match the criteria, then chopping off the first one and returning it. He also showed how adding indexes on commonly searched columns (in the above case, the user’s email) speeds up the SQL query: with an index, a binary search through the records is performed instead. Finally, he showed how, when there were a lot of records, the database used pagination so that it could work with blocks of records in memory rather than from the disk.
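
In Rails terms, the fix Pat described boils down to a one-line migration – an illustrative sketch, assuming a users table:

class AddIndexToUsersEmail < ActiveRecord::Migration
  def change
    # lets the database binary-search users by email instead of scanning sequentially
    add_index :users, :email
  end
end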

The other side of programming – Leon Gersing (Ruby Buddha).

Later in the day, after a fantastic lunch (a full 4-course meal: salad, paella, fried calamari, wine, and dessert), Leon Gersing, AKA Ruby Buddha, gave a talk that was pure motivation. His point was to step back and view problems in a different light. I have found that this has always helped me. Sometimes a good night’s sleep, a game of Dominion, or even just a quick coffee break can clear your mind and let you rethink something complicated.

The last talk of day one was by Ryan Levick, who proclaimed his love for Ruby on multiple occasions during his talk. He used the concept of static typing, which Ruby does not have, to make one simple point: Ruby is not the right tool for the job every time. There are other languages out there, and even though Ruby is amazing, it is not always the language you want to, or even need to, use.

Second day of talks

The next day we jumped right into talks at 9:00. The second talk of the day was about writing fast Ruby: Erik Michaels-Ober talked about some subtle differences in Ruby syntax that can improve performance in your app. His goals in this presentation were simple: optimize at the code level, improve performance by at least 12%, and maintain high-quality, readable Ruby code. Here is a quick summary of which syntax is faster (a sketch showing how to verify such claims yourself follows the list):

  • ‘yield’ is 5x faster than ‘block.call’
  • (1..100).map(&:to_s) is 20% faster than (1..100).map { |i| i.to_s }
  • ‘flat_map’ is 4.5x faster than something.map.flatten(1)
  • hash.merge!(e => e) is 3x faster than hash.merge(e => e)
    • This is assuming that having this hash be mutable is safe
  • hash[e] = e is also 2x faster than hash.merge!(e => e)
  • hash.fetch(:something) { :default } is 2x faster than hash.fetch(:something, :default)
    • This is because the block is only evaluated if the fetch doesn’t find anything
  • ‘sub’ is 50% faster than ‘gsub’
    • Most of the time we only want to sub the first thing anyway!
    • ‘gsub’ evaluates the whole string (global sub)
  • ‘tr’ is 5x faster than ‘gsub’
    • So why does everyone use gsub?
  • Using exceptions for control flow is 10x SLOWER than an if statement!
  • A while loop is 80% faster than an each_with_index loop
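
As mentioned above, claims like these are easy to check. A minimal sketch using the stdlib ‘benchmark’ module (my own example, not Erik’s; exact numbers will vary by machine and Ruby version):

require 'benchmark'

n   = 1_000_000
str = 'ruby-is-fun'

Benchmark.bm(5) do |x|
  x.report('gsub') { n.times { str.gsub('-', ' ') } }
  x.report('tr')   { n.times { str.tr('-', ' ') } }
end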

After this, Jason R Clark took us on a tour of very useful Ruby debugging tools. He started with the stuff already built in, the first being the infamous ‘puts’, which in the grand scheme of things can have its uses. He then showed how the ‘caller’ method works: it returns a stack trace from wherever it is called (super useful). He also showed how the ‘$!’ variable works: whenever it is referenced in the code, it returns the exception that is currently being handled. In the Ruby community we use a lot of gems, and sometimes our errors stem from them, so being able to debug a gem can save a lot of time. Jason showed that you can easily open up the files of any gem on your current Ruby version by running ‘gem open GEM_NAME’ or ‘bundle open GEM_NAME’ on the command line. You can then make changes to the code to help debug. Afterwards you can run ‘gem pristine GEM_NAME’ to return the gem to its original state, wiping all the changes you made. This makes sure you don’t accidentally commit any debugging changes to your gems. He also spoke about a few useful gems, one of them being Pry. Pry is a gem that you can use to create a kind of breakpoint in the code that opens an interactive console at that point. You can check instance variables, write code, and move around in the file using unix-like commands (‘ls’, ‘cd’, etc). There is also a bunch of useful Pry add-ons for stepping forward and back through the code so you can see in more detail what is going on.
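
For the curious, the Pry entry point looks like this – a minimal sketch, assuming you have the pry gem installed:

require 'pry'

def checkout(cart)
  total = cart.inject(:+)
  binding.pry # execution stops here and opens an interactive console
  total
end

checkout([10, 20, 12])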

The final few talks of the day covered various topics, from security to monads, and ended with robots. And dancing. With them. The last speaker had more of a demo than a talk: Julian Cheal showed some of the interesting things you can do with Ruby and robots – how you can use Ruby code to control everything from LED lights to drones. He even set up two flying drones to be controlled by a DDR dance mat and his phone. It ended up being a very spectacular way to finish off the conference.

Codegram should get a gold medal for the great conference organization as well as for the good foosball game. PS. We have the whole year for the training and we will beat you at #Baruco2015 :)

And it ends

All in all the conference was fantastically organized, a pleasure to be at, and a great learning experience. The speakers were all great, informative, and motivational. The lessons were practical, and easily applicable. The problem I have with most talks at conferences is that the ideas are good but they are sometimes difficult to incorporate in your everyday work. This was not the case with a lot of the stuff this year. I am very happy about spending some time in Barcelona and attending this conference. I would like to thank everyone that made it possible, namely Codegram for putting in so much effort to organize such an amazing conference!

Baruco 2014 is over. I was lucky to be there and soak in all that it offered. Let me share some of my thoughts.

Tapas

I arrived in Barcelona early enough to take part in one of the workshops prepared by the organizers as a kind of side dish to the main event. “Test drive a browser game with angular”, held by the Test Double crew – Zach Briggs and Todd Kaufman – was my choice. I figured this would come in useful for my day-to-day work, as angular is our first front-end choice here at Lunar. Although in technical terms I did not learn anything new about the framework – the code written during the workshop didn’t go past a simple controller, a few ng-models and some ng-repeat here and there – there was a great deal of added value in what Zach and Todd shared between the lines. They took a stand in the ongoing debate on the vitality of TDD, and their approach seems moderate yet reasonable – test drive your business logic, test drive complex parts, but don’t be dogmatic about it. There are uses and misuses. And it’s crucial to remember that TDD is not only about feeling safe and having your back covered; more importantly, it is about design emerging from your code driven by tests. With this in mind, Zach led the workshop during which we wrote a pure JavaScript app entirely separated from angular, test driven with jasmine. Tasty!

First course

The main event started with a bang – dry ice, a gig-like atmosphere, and all the conference heroes presented in a stunning animated intro. To be honest, I had mixed feelings about Baruco’s marketing theme. Superheroes? Seriously? You can hardly find a more exploited topic in the programming world. But the Codegram crew really pulled this one off. It was the perfect delivery, with all the little details that made it work. And you could easily tell how it boosted everyone’s morale – the speakers’ and the audience’s. Being part of this made me feel great. But this is the ruby community – that’s probably the right way to feel, isn’t it?

On the first day we had the pleasure of hearing Yukihiro Matsumoto speak about his current undertaking – mRuby – the smaller version of the ruby interpreter dedicated to embedded systems. He made a nice run through the various ruby implementations, dismissing each one of them as not good enough for embedded systems; CRuby*, for instance, is too POSIX-based. It was interesting to hear that with mRuby it may be possible to write software for vending machines, home automation or even some simplistic versions of satellites. Being a language designer, as Matz called himself (as opposed to a programmer), seems like a cool job.

Quote: If you launch rockets I hope you don’t use ruby

Next came Piotr Szotkowski, whom I remembered for a great talk about Bogus he gave at this year’s wroc_love.rb conference. I was expecting yet another tasty technical presentation and I was not disappointed. Piotr talked about powerful, but somewhat hidden features of ruby stdlib. He showed good code examples and gave sound pieces of advice.

MIL**: Study the Enumerable module. Then study it again.

Quote: Don’t return nil from your methods.

Warmed up by @chastell, we were ready for the heavy artillery. Pat Shaughnessy entered the stage, digging deep into ActiveRecord, Arel and Postgres internals. He went through things like ASTs, the visitor pattern, the yacc parser, the B-tree algorithm, database index implementations and the like. Demanding topics, yet Pat managed to keep the audience interested. Be sure to check out his newest book, Ruby Under a Microscope.

MIL: Learn the stuff down there, know the internals of the tools you use at least to some level – it will make you a better engineer.

The only lady among the Avengers – Emily Stolfo – gave a much less technical talk on a responsible release process.

Emily cleverly laid out her ideas, dividing them into 3 main areas: maintaining a simple API, clear communication, and semantic versioning.

Check out my notes for more details, or watch Emily’s talk if you are responsible for any gems or libraries. I will definitely come back to it when the time comes for me.

MIL: Think of your API as a user interface and provide good user experience to establish and maintain what’s most important – trust.

Speaking of responsibility, enter Jose Albornoz with his talk about … irresponsibility. Coincidence? I don’t think so.

The youngest of the speakers took us on a trip through his personal experience of what he called conference driven development.

In an entertaining way he described how he created the first ruby Gameboy emulator from scratch, just to find out that it was… not the first.

The technical details were a bit dry, but the main point was clear – find time to write irresponsible code just for the sake of it, learn, and embrace the fun of coding.

MIL: You don’t have to know the difference between bits and bytes to write code.

So we are sitting there, halfway through day one, and then comes this guy – Leon Gersing – opening with a quote from Fear and Loathing in Las Vegas. Yeah. This is going to be good. It’s impossible to sum it up – just watch it when it comes out, it’s worth your time.

MIL: Thought can’t be replaced by a process. You can’t adopt the culture without understanding it – this goes especially for Agile philosophy.

Also: The Perfect High

Main course

The last talk of the first day, by the energetic Ryan Levick, corresponded well with the first talk of the second day, delivered by Brian Shirai. Both gentlemen talked about a recently hot topic – static typing and type safety, and whether we need them in ruby. Interestingly enough, Ryan and Brian found themselves on opposite sides (marks, not ends) of the problem’s spectrum, the former pointing to the lack of types as one of ruby’s weaknesses, the latter explaining how types can do more harm than good. If I were to judge – Brian presented a much stronger case. His talk was thorough, contained many cross references and left me with a huge amount of material for further study. Since it was difficult to grasp it all on the spot, I will definitely be coming back to it.

MIL: Types don’t fit where there is much interoperability (objects). Programming is a behavioural science. Propositions as types = logic as types, and that is not a good idea because we can’t really use logic well.

Having had such a good start to the second day, we were about to see even better stuff. Erik Michaels-Ober enchanted us with a beautifully illustrated and content-heavy story of how optimizing ruby code for performance can be fun and not code-obscuring. It turns out you can write fast and pretty code at the same time. Lots of good tips and simple code examples, each backed by benchmark analysis. Watch the slides or check my notes for details. A fully professional talk from every standpoint. 10/10. If I had to choose only one talk to watch – this would be the one.

MIL: Performance optimization can be fun and can even have a therapeutic effect.

Next up was Jason Clark on various debugging practices and tools. A well prepared talk, spanning the entire scope of debugging options out there. I will definitely be reaching for this presentation in times of need.

There couldn’t be a ruby event without at least one talk about services. Evan Phoenix filled this spot nicely, balancing the pros and cons of services and leaning towards the conclusion: use services when the problem becomes too complex to grasp. Definitely worth watching if you have never used SOA or felt like you used it wrong. What I liked most is that Evan pinpointed some easy to follow good practices, e.g.: never share an AR model between services, start with services that map to the boxes drawn on the whiteboard, do a fire drill once a month, and have one convention for how services talk to each other – set it and don’t discuss it too much, just like you don’t discuss method calls.

MIL: Every sufficiently complex problem domain will require an app larger than the human cognition mass threshold, and a team’s threshold is smaller than a single developer’s.

Quote: If an app is never deployed, was it written?

Did you hear about the monad tutorial fallacy? In short, it states that once you understand what a monad is, you become unable to explain it to others. It seems this no longer applies, thanks to Tom Stuart (@tomstuart), whose presentation on implementing monads in ruby left me in awe. Unfortunately, I can’t explain it to you. Watch this talk when it becomes available, I dare you.
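
Since monads famously resist prose, here is a toy Maybe in ruby – my own minimal sketch of the general idea, not Tom’s code:

class Maybe
  attr_reader :value

  def initialize(value)
    @value = value
  end

  # the monadic bind: run the block only if there is a value to run it on
  def and_then
    @value.nil? ? self : yield(@value)
  end
end

# nil-safe chaining without explicit nil checks:
Maybe.new(' monads ').and_then { |s| Maybe.new(s.strip) }
                     .and_then { |s| Maybe.new(s.upcase) }
                     .value # => "MONADS"

Maybe.new(nil).and_then { |s| Maybe.new(s.upcase) }.value # => nil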

Book tip: Understanding Computation.

At this point my brain was swollen and we still had 3 more talks to go. I didn’t take many notes during them, so let me just quickly summarize: Matt Aimonetti told a colorful story about the pitfalls of cookie based authentication. A well prepared speech, definitely worth watching.

Tom Stuart (@rentalcustard) gave a somewhat vague talk about the simplistic, 3-legged construction of lisps and the interesting features of Smalltalk, Lisp and Bash that surprisingly make them impractical. As to why that is – he left us with an open question. Or I might have gotten it completely wrong. Last but not least, we had some fun with blinking arduinos and drones flying over the stage to the rhythm of a psycho Gangnam Style. All of that controlled by a ruby-programmed PS dancing mat with Julian Cheal jumping on it. Not everything worked as Julian had planned, but nevertheless the audience was in sheer joy for a good couple of minutes. A perfect ending.

Dessert

The conference organisers did a tremendous job making the attendees (around 500?) feel taken care of. The auditorium was comfortable, the lunch spots offered great Catalan food and the evening parties were fun. Conference announcer Jeremy Walker didn’t lag behind either, even managing to do some live coding on stage.

My thanks go to all the heroes and Baruco team amongst them. Great job. See you next year!

Oh. And Barcelona, you too.

* Matz argued that we should probably stop using the name MRI for his first interpreter and switch to CRuby.

** Most Important Lessons

Our organizations would benefit if people shared more feedback with each other. In fact, I can hardly think of any company where insufficient peer-to-peer feedback isn’t an issue. On that account Lunar isn’t any different, even though we take transparency really seriously. We would appreciate more peer-to-peer feedback on a regular basis.

At the same time, sharing feedback isn’t an easy task. Frequently it requires us to move out of our comfort zones. In the end, we just don’t do it.

Our solution was the feedback week. It is a very simple, safe-to-fail experiment. Anyone who wishes to participate tapes an envelope with their name on it to a wall in the common space, and everyone else is invited to write down feedback for that person and put it in the envelope.

To make it a really safe-to-fail experiment, a couple of additional constraints were needed. In our case the whole thing was completely opt-in: if you didn’t want feedback, no one forced you to participate. Also, only positive and supportive feedback was allowed.

That’s it. Told you. It’s simple.

Oh, and it lasted a little bit more than a week, that’s why we called it the feedback week.

Now, before I go further: there was obviously a hypothesis behind the experiment. I assumed that if we provided an environment where people didn’t have to move out of their comfort zones, they would share something new with others. At the same time, as long as anyone put even a single piece of paper in a single envelope, I’d assume the experiment provided value in terms of generating more peer-to-peer feedback at Lunar.

One could question the rule that only positive feedback was allowed. One view on the usefulness of feedback is that it’s easier to build on one’s own strengths than to address weaknesses. From that perspective, supportive feedback bears more value than critical information.

Personally, I don’t subscribe to that view – at least not as a general rule. I, for one, learn much more effectively from critical feedback. I do understand, though, that people have different learning patterns, and for many, supportive feedback is exactly what they need. For the rest, if nothing else, it feels good to hear good things about ourselves, so there’s no real downside.

Another thing is that making the experiment safe was more important to me than maximizing the feedback it produced. After all, we were playing with people’s behaviors and organizational culture. There’s no fallback strategy for such things.

Anyway, you want to know the results, right?

Almost everyone participated in the experiment. That validated the assumption that we strive to get more feedback. As for the volume of feedback, we could judge it by the number of notes put inside the envelopes. There were a lot of them. It wasn’t just one here and there. People really did their homework and shared. A lot. (By the way, you guys are awesome! Thank you!)

Did it work?

Sure it did. That isn’t the best part, though. What I quickly realized was that many of us used the feedback week merely as a catalyst to do the ultimate thing: go and share feedback face to face. For one thing, if someone wanted to share something critical, the envelopes wouldn’t work because of the rules. For another, some of us realized that they didn’t need this artificial mechanism and felt comfortable enough to share feedback the way it is supposed to be shared.

This was the real magic of the feedback week. It didn’t merely act as a one-time event. It influenced our behaviors – and not just for a short while, but in the long run. After all, once you learn that sharing feedback isn’t really as scary as you thought, and that people react to it really well, you are much more likely to do it again.

Oh, and by the way, many envelopes are still taped to the wall even though the feedback week is done. What’s more, since the experiment is over, everyone sets their own rules, like “here goes everything that you can’t tell me face to face.”

The best part about this tool is that it is applicable in pretty much any context. You can run it company-wide, but also in the context of a team, or even individually. In fact, it would work even in a low-trust environment. Of course, the results wouldn’t be nearly as good as in our case, but you’d still get decent outcomes.

Even though I’ve pulled down my envelope, I know there will come a time when I’ll put it back up for a while. I will keep doing that until everyone feels comfortable sharing any crazy feedback they might have. It won’t happen overnight. In fact, if it ever happens, it will blow my mind.

A year ago we started working on a new project. We spent a few months adding new features and making old code better. The build was green and fast. Time flew by and our test suite grew – a natural thing, you would say. Eventually our test suite took about 12 minutes. When we were working intensively, a few new commits in the repository caused a long delay before we got feedback from our Continuous Integration server. That was annoying.

Balancing

An obvious thing happened: we decided to add extra CI nodes and split our tests across them. The simple solution for the split was to assign an equal number of test files per CI node. Some of the test files, like unit tests, were super fast, while others, like end-to-end tests, took much longer. The simple split wasn’t smart. We ended up with three fast CI nodes and one very slow one.

without_knapsack

It was sad seeing three CI nodes wasting their time.

Time is your friend

We tried a few solutions, but calculating time was the best one. I started working on a gem called Knapsack. The name is based on the knapsack problem. :) This gem helps split specs in parallel across CI server nodes based on each spec file’s execution time. It generates a spec time execution report and uses it for future test runs.

Don’t waste your CI nodes’ time

Now, with Knapsack, our test suite is split across CI nodes in a more efficient way. Here is an example of how execution time looks for each CI node with Knapsack.

with_knapsack

Get started with Knapsack

Add the gem to your Gemfile and run the bundle command:

gem 'knapsack'

You need to bind the knapsack rspec adapter at the beginning of your spec_helper.rb:

require 'knapsack'
Knapsack::Adapters::RspecAdapter.bind

And the last thing: edit your Rakefile and add these lines:

require 'knapsack'
Knapsack.load_tasks

Generate time execution report for your spec files

After you add Knapsack to your project, you need to generate a report with the spec files’ execution time data. Run this rspec command on one of your CI nodes:

$ KNAPSACK_GENERATE_REPORT=true bundle exec rspec spec

It will run all your specs and generate a file called knapsack_report.json. The contents of this file will be output at the end of the test suite. You then need to commit knapsack_report.json to your repository. Knapsack will use this file for better test balancing across your CI nodes.
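
The report is simply a JSON hash mapping each spec file to its execution time in seconds – an illustrative example with made-up paths and timings:

{
  "spec/models/user_spec.rb": 2.88,
  "spec/features/signup_spec.rb": 10.52
}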

This report should be updated only after you add a lot of new slow tests, or when you change existing ones in a way that causes a big execution time difference between CI nodes. Either way, you will get a time offset warning at the end of the rspec results, which reminds you when it’s a good time to regenerate the knapsack report.

Using knapsack on your CI nodes

Run this command on your CI server, where CI_NODE_TOTAL is the total number of nodes and CI_NODE_INDEX is the index of the current node (CI servers usually start counting from 0):

$ CI_NODE_TOTAL=2 CI_NODE_INDEX=0 bundle exec rake knapsack:rspec

The epic split: no problem

We are happier now because our CI feedback is much faster, and we know that at any time we can add another CI node and get the epic spec split out of the box thanks to Knapsack.

There is always room for improvement

Do you want to help? There are a few things we can improve, like adding adapters other than RSpec or improving the spec assignment algorithm. Feel free to fork Knapsack or just give us your feedback. Many thanks!

Oh, and one more thing: check the readme, because Knapsack has even more features than described here.

a9n

I believe that many of you (ruby devs) have seen a file in one of your projects with a lot of constants, full of if Rails.env.production?. It’s a nightmare. Even worse, it’s usually an example file that’s supposed to have a local, gitignored copy. And because the content of such files is usually not verified, getting an uninitialized constant error with no clue about what happened, and getting annoyed, is just a matter of time.

Another evil thing is the fact that all types of configuration items (access keys, mailer settings, etc.) are often split across various files and classes. Instead of having a dedicated place and being easily accessible, they mess up the code and their maintenance is a nightmare.

Unfortunately, Rails doesn’t offer any way to manage and verify custom configuration. Rails 4.1 introduces config/secrets.yml, which partially solves the problem, but it’s not enough to keep all the configuration maintainable.

I came up with my own solution called a9n (a numeronym for application) to keep my ruby and rails apps’ configuration easily maintainable, verifiable and clean.

Sources and gem

https://github.com/knapo/a9n

https://rubygems.org/gems/a9n

How it works?

a9n expects a configuration.yml.example and/or configuration.yml file in the app’s config directory. You can have both – configuration.yml.example tracked by git and a local configuration.yml ignored by git – or just a single configuration.yml tracked by git.

If both files exist, the content of configuration.yml is validated. It means that all the keys from the example file must exist in the local file; otherwise A9n::MissingConfigurationVariables is raised with information about the missing keys.

All configuration keys are accessible by calling a method on A9n. Let’s say you have:

defaults:
  email_from: 'no-reply@knapo.net'
production:
  app_host: 'knapo.net'
development:
  app_host: 'localhost:3000'

So you can access the config by:

A9n.app_host # => `knapo.net` in production and `localhost:3000` in development
A9n.email_from # => `no-reply@knapo.net` in each environment

Custom and multiple configuration files

If you want to split the configuration, you can use multiple files. All files from config/a9n are loaded by default, but you may pass custom file paths as an argument to A9n.load, e.g. A9n.load('config/facebook.yml', 'config/mongoid.yml'). In such cases config items are accessible through a scope consistent with the file name, e.g. having config/a9n/mandrill.yml:

defaults:
  username: "knapo"
  api_key: "1a2b3c4d"
production:
  api_key: "5e6f7g8h"

You can access it by:

A9n.mandrill.username # => `knapo`
A9n.mandrill.api_key # => `1a2b3c4d` in production and `5e6f7g8h` in other envs

Capistrano

If you use capistrano and feel safe enough to keep all your instance (staging, production) configuration in the repository, you may find the capistrano extensions useful.

Add an instance configuration file, e.g. configuration.yml.staging or configuration.yml.production (NOTE: the file extension must be consistent with the capistrano stage), and add

require 'a9n/capistrano'

to your deploy.rb file. This way configuration.yml.[stage] overrides configuration.yml on each deploy. Otherwise you’d need to store configuration.yml in the shared directory and link it the regular way.

More details and the setup instructions are available on github.

I was wandering around Cascais to find a place for dinner. Not just any place – it’s pretty easy to find a place in touristy locations – but a place that serves good food. I ended up in an alley with four restaurants side by side. I didn’t make my choice based on the menus displayed outside; I used the guidance of reviews I had found on the internet. In fact, the reason I ended up in that alley at all was that I was seeking that very place.

Later, when I was eating a gorgeous sea bass, I realized that I didn’t have the faintest idea what the other three restaurants charged for a similar meal. The neighboring places might have been cheaper, even significantly cheaper, and I wouldn’t have noticed.

I guess price is not the only factor one considers when looking for a good meal. Of course, I looked at the prices before committing, a.k.a. ordering, but the idea of cross-checking the prices with the other restaurants hadn’t even crossed my mind.

I wasn’t there for a cheap dinner. I was there for a good dinner.

Another layer to the story is why I was in Cascais in the first place. It is where this year’s Kanban Leadership Retreat was held. This is the event I plan my travels around every year. The quality is great, the setup is perfectly adjusted to its goals, and the people who show up are the right people.

Is it cheap? No. I wouldn’t say it is super-expensive either, but it’s definitely not cheap. Does the price matter to me? Not until it is outrageous.

I’m not here for a cheap event. I’m here for an awesome event.

Now, why would I bore you with stories about eating great seafood in beautiful Portugal?

The reason is that it painfully reminded me of the many sales conversations I have had with potential clients. The focus of those calls was our rates. What’s more, the rates seemed to be the only key factor on which these folks based their decision.

This is exactly the wrong discussion to have.

It’s like looking for a good dinner and choosing the cheapest place. It’s like looking for a good event and choosing the least costly one. Do you do that? So why, the heck, would you do that when the future success of your product is at stake?

Don’t get me wrong. I’m not saying that price is irrelevant. We all have budgets of some sort. I wouldn’t pay 200 EUR for my dinner only because the sea bass was delicious. What really matters here is value for money. And what counts as value for money? That is the ultimate question that needs to be answered. That is the parameter we should focus on.

Interestingly enough, when it comes to feeding ourselves we are pretty damn good at this. We don’t eat crappy food only because it hits our pockets the least. Occasionally we go for something fancy even if it is expensive. We can even dynamically balance the tradeoffs between cost and quality depending on the context.

On occasion we take convenience into consideration and eat whatever is available at hand. Sometimes we may not be able to afford the best value for money, as it would simply be too expensive. From time to time we experiment and go with a risky option whose value we can’t easily assess.

Now, the thing I don’t understand is why people turn their common sense off when it comes to building their products. Imagine that you have an idea you believe in. You made the effort to get funding, or you fund it yourself. Alternatively, you may act as a proxy for someone funding the whole thing.

Do you really want the cheapest possible delivery? Are you aware of all the tradeoffs you are accepting as part of the package? And I’m not talking only about quality or lead time, but also about all the interactions and collaboration.

Are you OK to get the fast food of software development? If so, that’s perfectly OK, but I’m afraid we are not the right partner for you.

If, however, you want to get good value, let’s discuss how we work so that you achieve a quality outcome. What’s more, I would encourage you to run an experiment. Don’t invite all your family and friends to our restaurant for your birthday party. Just stop by for a quick, light lunch. You’ll learn whether you like what you get.

When I say an equivalent of lunch, I mean just a few weeks of work. Define which feature or features would be required in every single crazy scenario you can think of. I’m not asking you to define an MVP. Just something that you’d start with. We sometimes label it a Minimal Indispensable Feature Set. Once we’ve built it, you will pretty much know whether you want to continue. And so will we.

By the way, a conversation about rates makes perfect sense in that context. Except you may realize that, just as with my dinner, it is fairly irrelevant as long as it is reasonable. Not reasonable as in “compared to the cheapest fast food around” but reasonable as in “seems like a fair price for what I expect to be a good dinner here.”

Another interesting angle to this whole discussion is that the consequences of choosing a bad restaurant aren’t nearly as painful as the consequences of choosing a bad partner to work with on your software product. Yet it seems to me that software vendors are most often chosen for all the wrong reasons. No wonder that what passes as acceptable quality for software would be considered appalling in pretty much any other context.

There is one more thing. When looking for a restaurant it isn’t unusual that we use higher prices as an indicator of quality. After all, if the restaurant had exactly the same quality as all the other places they wouldn’t be able to consistently charge more and keep the business running. They must be doing something better, right?

The same goes for how busy a place is. The more people sitting around the tables, the more likely it is that they serve good food.

Keep that in mind next time you’re looking for a partner for something lasting, costing a few orders of magnitude more than a dinner – like your web application, for example.

Finally, please don’t treat this as a marketing message. Even if you were considering us as your partner and used this guidance to choose someone else, that’s perfect. Any business relationship is healthy and sustainable only as long as it is win-win. What I hope is that this post is helpful in finding those win-win relationships.

The Idea

It’s hard to find a forum or any other social site that wouldn’t let you set up an avatar to establish your own unique identity. Sure, aside from making people’s profiles unique, it also has practical value. For example, on a Kanbanery board with lots of users it would be hard to tell them apart. No doubt, not having avatars in place would lead to a mess, which is not productivity’s friend. To let users add their avatars to Kanbanery, we integrated Gravatar. It’s a service that lets you create an account using your email address and set an image that will be used as an avatar on any site that has Gravatar integration in place. Cool, isn’t it? Ummm, not really.
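For context, the integration side really is simple – Gravatar boils down to building an image URL from an MD5 hash of the user’s e-mail address. A minimal Ruby sketch (the s and d parameters – size and default fallback – are standard Gravatar options):

require 'digest/md5'

# Gravatar serves the avatar for a given e-mail address under a URL
# built from the MD5 hash of the trimmed, lowercased address.
def gravatar_url(email, size = 64)
  hash = Digest::MD5.hexdigest(email.strip.downcase)
  "https://www.gravatar.com/avatar/#{hash}?s=#{size}&d=identicon"
end

gravatar_url('jane.doe@example.com')
# => a URL pointing to the user's avatar (or an auto-generated fallback)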

The Problem

Avatarly – an alternative to Gravatar

Practice shows that most users (ok, at least Kanbanery users) have never heard of Gravatar or simply don’t want to use any additional services like that. It leads to a situation in which customers get confused – where’s the ‘upload avatar’ button? At some point we realized that there were tons of Kanbanery boards with many users who hadn’t set up their avatars, all using the default one and not liking the service because it had ‘no possibility to set up an avatar’. Of course, Gravatar lets you set up your own default avatar, but that’s not a solution – all the defaults would still look the same. Summing this up, I still think that Gravatar is a cool idea, but it lacks popularity (maybe now, when it belongs to WordPress, that will change at some point?) and that, in my opinion, makes it nearly useless for most users. So, what do you do if, for any reason, you don’t want to let users upload their own avatars?

The Solution… maybe?

Avatarly in Kanbanery

Here’s how it looks in Kanbanery.

Use Avatarly! What’s that? When trying to solve the problem for Kanbanery, for a long time I hadn’t realized that the solution had been right in front of my face the whole time. If you’re using Gmail, you’re probably familiar with the simple avatars consisting of a colored background and your initials only. I was pretty sure that somewhere out there there must be a gem letting me create avatars like that in any Rails application. Ok, now it’s there, but it wasn’t there when I was looking for it. I had to create it. I called it Avatarly – you can find it on RubyGems and the source code is available on GitHub. Basically, it doesn’t do anything aside from taking any string provided by the user and making a Gmail-like avatar out of it, returning an image that you can save or use on the fly. As simple as that, but you’re not limited to the defaults. You can set your own background color, the font, and the font’s color and size. As for the text, it can be any string, including email addresses.

Isn’t that cool? Notice that it doesn’t force you to give up on using any other avatar ‘providers’ (like Gravatar, doh) if you want them. There’s a simple demo on Heroku, so you can give it a try.
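And if you’d rather see code than click through the demo, here’s a minimal sketch of using the gem – the option names follow my reading of the README, so double-check them against the current docs:

require 'avatarly'

# Generate a Gmail-like avatar for the given string (here an e-mail
# address -- Avatarly picks the initials out of it) and save it.
avatar = Avatarly.generate_avatar('jane.doe@example.com',
                                  size: 64,
                                  background_color: '#34495e',
                                  font_color: '#ffffff')

# The result is raw image data that you can serve directly or write out.
File.open('avatar.png', 'wb') { |f| f.write(avatar) }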

Sunny internship

Think of a company that cares about technical skills but pays even more attention to how we act as part of a team. Think of a company where collaboration is paramount, helping others is the default option and competitiveness isn’t welcome. Think of a company that, despite having pretty damn good developers, understands that writing more code is rarely, if ever, the best strategy to please their clients.

WELCOME TO LUNAR LOGIC

We are looking for great candidates for our summer internships*. Obviously, we are looking for people with decent Ruby on Rails, JavaScript and/or iOS development skills, but those are not necessarily the most important qualifications. Most of all, we are looking for people who care about the teams they’re a part of and the projects they build. This means being open to a much broader context than just coding.

And yes, we’re going to help you on that journey and we promise it’ll be a lot of fun too. If you feel like you want to be a part of all this apply here.

*Internships are planned for 3 months (July to September) and take place in Krakow.

What we expect:

  • Decent RoR, JS and/or iOS development skills
  • Passion for learning
  • Empathy and interpersonal skills

What we offer:

  • Support on your learning path
  • An unusual work environment with: kudos, badges, board games, etc.
  • A lot of fun
  • A paid internship

Want to become part of our team? Apply here.

We don’t believe that money is a motivational factor. We don’t believe that bonus systems work either. Which means that we don’t have monetary bonuses. Sort of, um…, actually we do. Are we being hypocritical? No, not really. Here’s why.

We have bonuses on only one occasion per year – just before Christmas. In fact, they are even called Christmas bonuses. The goal of the bonus was never connected to appraisals or performance at all. It’s just a nice gesture before Christmas.

That was reflected in how we shared these bonuses. Everyone who spent an entire year working with us on a full-time basis would get the same amount. By the way, this wasn’t a huge pile of money. Part-timers or people who joined during the course of the year would get their piece of the pie proportionally. The employee’s position wasn’t reflected in this at all. Neither was seniority, salary, performance nor anything else.

I told you that it wasn’t any sort of an appraisal tool, didn’t I?

I wouldn’t share this if it weren’t for the fact that we have already changed how we handle Christmas bonuses. No, we didn’t get rid of them. Since the premise of the bonuses was simply to make people feel happier, we decided to go further.

Getting goodies is nice – more so if you really need them. Sharing goodies is even nicer – more so if others really need them. So why not allow people to share? This is exactly what we do.

Instead of simply getting your bonus, what you get is a few sticky notes – each worth the same amount. You can write down the name of anyone working for the company and they’ll get that amount added to their bonus.

Of course, you can choose as freely as you want. Choosing yourself is perfectly OK. In fact, if someone liked the old way of doing Christmas bonuses, they can still have it that way. You can, however, be as generous as you want, in whatever way you want. Maybe you want to share because someone really helped you? That’s great! Or maybe it’s just the fact that someone needs money more than others? Awesome!

Despite how much we value transparency, we don’t make these decisions public. What is public is the overall result, but not the individual decisions that contributed to it. Why? There’s one reason.

Having the opportunity to share makes it more difficult to take everything for ourselves. We make it explicit that doing so is perfectly OK, yet some may still feel it’s a bit selfish. In fact, the only complaint about the bonus system we’ve had is that some of us felt it was easier when we didn’t have to make these decisions explicitly.

That’s a pretty damn low cost for the opportunity to feel like Santa Claus if you ask me.

And obviously this has nothing to do with the common perception of what bonuses are. Then again, there are many things we do that are anything but the canon of management. That’s all part and parcel of being an exceptional company.

How old were you when you first started falling in love with… Ruby on Rails? How old should you be to start learning programming? Rails Girls Youth shows us that there are no barriers blocking you from learning Ruby on Rails development and that you can start very early. WebMuses (who organize Rails Girls events in Krakow) invited 40 girls aged 13 to 19 to a workshop on the 5th of April (though in the end the youngest attendee was just 11).

Rails Girls Youth - summary

Check out our poster summarising the Rails Girls Youth 2014 edition :)

What was behind that event? It was the first edition of Rails Girls Youth in Krakow, but Rails Girls is a series of workshops which have been organized around the world (e.g. in Helsinki, Shanghai, Singapore, Tallinn, Berlin and Warsaw) since 2010. The goal is to open up technology and make it more approachable for girls and women. Overall, it was the third edition of the event in Kraków.

Lunar Logic has been a partner of Rails Girls Krakow since the beginning. We help with pleasure, and when Ania Migas (who works at Lunar but is also one of the RG organizers) asked who would like to coach at the 2014 edition, 8 people were interested. It was an amazing event. Why? Below you can read opinions about the workshop from the organizers’, coaches’ and coachees’ perspectives:

Ania (WebMuses member & Rails Girls organizer)

WebMuses (myself included) organized Rails Girls for the third time. This year’s edition was different – we decided to dedicate the event to young girls (13-19 years old) who haven’t yet decided what to do with their lives – and it was a great idea! The girls were full of energy and enthusiasm. At least half of the participants of Rails Girls Youth declared that they would love to become programmers in the future. I can’t wait to see that!

From the organizational point of view, the greatest challenge was choosing the girls, all of whom deserved a chance to attend our event. One of the organizers, Przemek, made a simple app that helped us mark all the entries. Picking applicants was no longer an organizational problem.

Lunar Logic - cakes

Cakes for girls :)

The other thing was finding sponsors who would like to support our event. The point was to convince companies that these girls are the future and that, sooner or later, some of them would start looking for IT-related jobs. Somehow I convinced Lunar Logic to become a sponsor and they helped us buy lunch and tons of delicious cakes for the merry group of people attending the event. ;)

Hopefully, it won’t be the last Rails Girls edition in Krakow. And probably not the last one dedicated to younger girls. Everything went as expected, but we were super tired by the end of the day. Taking care of almost 70 people and making sure that everyone is happy is not an easy task. Tiredness didn’t prevent us from attending the after party, which was combined with the WebMuses b-day party, though. :)

Coaches’ point of view:

Hania

This was the fourth time I coached at Rails Girls and I was really curious what it would be like to work with such young girls. Actually, I was a little bit scared too.

But the girls were wonderful – eager to learn new things, focused when needed, and catching on really quickly. It was a real pleasure to work with them. They were also more persistent than the older girls from previous editions, maybe because they still have their “learning mode” activated :)

It was an excellent idea to organize this event for a younger audience and I hope WebMuses will continue their great work on both motivating and inspiring people.

Tomek

The event was well prepared; all the organizers and coaches took their responsibilities seriously. However, all that we, as coaches, received was a guide on how to create simple Rails applications and a set of tips from friends who had coached during previous editions of Rails Girls. We had to figure out the rest ourselves.

The girls were mostly middle and high school students who are used to learning in quite a formal way. This is completely different from how workshops and meetups for programmers usually work.

I think the most important challenge was to convince them that I’m not a teacher but just a colleague who knows how to use this strange Rails thing. This significantly improved the communication and added lots of fun and humour throughout the day we spent together.

I was totally exhausted after the event ended, but satisfied to see that the girls had learnt the basics of HTML, CSS and Ruby on Rails. They left with ideas for applications to create and plans for what to study next.

Maciek

I was very impressed by how well organized the event was. The Rails Girls had been working on it since January and everything was prepared down to the last detail. I’m also astonished that so many teenage girls were interested in programming and web development. They made the effort to wake up early in the morning and come to Kraków from far away. All of them were eager to learn. This workshop was a real stereotype breaker.

Grzesiek

I participated in last year’s Rails Girls as an observer, helping girls and coaches with their technical issues. The idea of programming workshops seemed so cool that I decided to take part again – this time as a coach and a member of WebMuses.

Grzesiek at Rails Girls Youth

Yeah, the event was not bad.

My group turned out to be one of the youngest. It made me really anxious that my teaching skills wouldn’t suffice and I’d bore them to death. Fortunately, the girls were great and understanding. They kept asking questions and pushing for more knowledge. It was really amazing to see their engagement in learning how to code. I really wish I could show you what we built during those 10 hours of hard work (we didn’t manage to deploy it to the web :( ).

Just after lunch I ran a small exercise for the girls – Bentobox. The idea was to make them more comfortable reading technical texts and recognizing popular technologies. They were presented with a list of 10 tools, libraries and concepts (like PHP, Django, SaaS, jQuery, nginx) and the goal was to figure out what each item was.

After a long day of work, all the coaches and organizers received lots of love from the attendees. Many girls mentioned how cool coding, the programming community, and finally, the workshop, are. We heard many statements about becoming a programmer. This was very inspiring and made me feel like I accomplished something big.

Coachees’ point of view:

Weronika (Paulina’s sister, 15 years old) 

Weronika Materna at Rails Girls Youth

Weronika Materna with her coach and colleague.

An open, friendly atmosphere and perfect communication with the coach – that’s my one-phrase description of the Rails Girls workshop. At first I was stressed and full of doubts, but I closed the workshop with a lot of positive energy and openness to new ideas. I’m happy about that, as my first contact with HTML, CSS and Ruby on Rails was painless and came with such a great atmosphere, crazy positive people, a comfortable working environment and… of course – muffins! :D

I strongly recommend Rails Girls workshops to everyone who (like me before) doesn’t know how to get started.

Julka (Lucek’s daughter, 11 years old)

Julka Odziewa at Rails Girls Youth

Julka was the youngest attendee.

When registering for this year’s Rails Girls, I was extremely curious, and a bit scared too – is this ‘coding thing’ as complicated as it seems to be? Luckily for me, it wasn’t my first contact with this kind of workshop – I accompanied my mum when she was learning how to code at last year’s RG. But what is it like to actually be an active participant? It turned out to be a totally different experience – much more interesting! At first everything looked like a Chinese language class to me – I had to deal with parts of my computer that I had never heard of, let alone seen before. But thanks to the great coaches, minute after minute everything became clearer and all the ‘wizardry’ turned out to be less mystical than I had thought before. However, there’s one thing I did not like about Rails Girls – that it ended so quickly :( I can’t wait for the next workshop I’ll be able to take part in. Thanks for everything and hope to see you again soon!

Why do we go to Ruby conferences? Depending on who you ask, the answer will be different, but among all the answers we can distinguish the three most frequently repeated: to learn, to share knowledge and to meet people. Conferences are certainly the best opportunity to do all of that, and wroc_love.rb was one we took. There aren’t many Ruby-related events happening in Poland, which made it all the more tempting for us. On a warm Friday morning, 15 of the Lunar Logic clan departed for Wrocław.

All the technical talks were scheduled for the following two days, so Friday afternoon was all about project management techniques, programmers’ soft skills and failed ventures.

During the three days of the conference we had the chance to see 14 presentations. There is no way to tell you about them all in one short blog post. That’s why I’ve chosen the 3 most interesting ones.

Why should we care about design? What are software boundaries? Which patterns are needed to create a maintainable project? Adam Hawkins answered these questions. Of course fourteen minutes was not enough. That’s why Adam created a series of articles about rediscovering software design.

Another speaker worth mentioning is Markus Schirp who is, with Piotr Solnica, one of the core team members of Ruby Object Mapper. He talked about mutants and how to kill them. Mutant, a gem created by Markus, is a tool for mutation testing.

Of course we have to mention Piotr Szotkowski’s talk about Bogus, a library created by former Lunar employees Adam Pohorecki and Paweł Pierzchała. Bogus is a tool which helps developers write reliable tests ensuring that they don’t stub or mock methods that don’t actually exist in the mocked objects. If you haven’t tried it yet, do it!
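To illustrate the failure mode Bogus guards against: a plain test double happily lets you stub a method the real object doesn’t have, so the test passes while production code would crash. A hedged RSpec-flavoured sketch (the Mailer class and its methods are made up for illustration):

require 'rspec'

# Stand-in for some real collaborator that only defines #deliver.
class Mailer
  def deliver; end
end

RSpec.describe 'a plain double' do
  it 'lets you stub a method the real Mailer does not have' do
    mailer = double('Mailer', send_mail: true)  # no such method on Mailer!
    expect(mailer.send_mail).to eq(true)        # passes, yet prod would crash
  end
end

Bogus verifies its fakes against the real class, so a typo like send_mail fails the test instead of slipping through.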

Besides the talks, the wroc_love.rb conference also featured Q&A blocks during which experienced developers could share opinions about code metrics and legacy Rails apps.

Almost all the talks had something in common: all of them reminded us that Rails is not perfect. Every programmer who has created at least a medium-sized Rails app knows this. Rails, as a framework, gives us great tools that we can use to build apps fast, but when our app’s popularity reaches a certain level we realise that something is wrong: some things work too slowly, there are problems maintaining the app’s code, and adding new features is a pain. That’s why we, as developers, constantly come up with ways to work around Rails and cope with all these problems.

For some time now we’ve been hearing discussions about the future of Rails. There are so many good ideas, but we cannot yet predict whether any of them will be part of the next Rails release. Andrzej Krzywda, one of the wroc_love.rb organizers, created an open document which contains a few ideas worth reading about.

To be honest, I can’t say anything bad about the organizational side of the conference. The talks were very well prepared, speakers always answered questions from the audience and there was hot coffee for all participants. People had a lot of time for networking during breaks and evening parties.

Conferences are great. They make us aware of many problems and at the same time they provide solutions. For me it was also a great motivator. After spending a few days with many people with similar interests, I felt and still feel ready to take on the challenge of improving my skills.

 

PS. wroc_love.rb was also called the 2048 conference ;)

wroc_love.rb conference

#responsivedesign, #2048game

Well, winter didn’t come to Poland (and we hope it never does ;)), but… Paulina managed to join our team. Last Monday, Paulina started an internship with our company after a pretty long recruitment process in which we tried to find a quality king through a little game of thrones (Grzester describes the whole process below). It was Grzester’s first time running a recruitment, so we crossed our fingers that the new experience would bless him with a wealth of knowledge to carry into the future. And… Grzester did well and managed to choose a great QA intern. Let’s see how it happened!

Grzester’s story:

That was a long journey… an epic journey entitled ‘The Recruitment of a QA Intern’. I will try to describe briefly how the whole process unfolded. It was a really good experience, during which we learned a lot. At the beginning of December, I spent some time thinking about how to prepare this task. It wasn’t easy, mainly because I had never organised a recruitment before. After a few discussions with Paweł, we decided to run a two-stage recruitment process.

During the first stage we wanted to check some basic QA skills such as creativity, consistency and critical thinking. We decided to prepare a small application packed with tons of issues. The application was called WTS!. It was a quick form which allowed a user to submit rudimentary sales ads. The task was simple: just run some exploratory tests and send us the results. We received a lot of good solutions to the exercise. I prepared an answer key, which helped me validate the results and choose the best candidates.

For me, the most important elements were engagement in testing, curiosity and a good understanding of the application.

Grzester notes

An interview at our office was the second stage of the recruitment process. To be honest, it was the first time in my life that I was on the other side of the table – I was the recruiter, not the recruit. I gathered a lot of experience from this. During the interview, I wanted to understand how the candidates had tested the application we prepared, what kind of tools they used and what was most important to them during testing. I also prepared another short exercise: I asked each of the candidates to test a Coca-Cola can. If you think that it’s an easy task, go and buy a can. Try to test it! The results were really surprising. People were super creative: throwing the can, checking the text on the can, checking the internal pressure, checking the composition and much more. It was really fun and I am sure that it helped relax the atmosphere during the interview. That was the final stage of our recruitment. Only one thing was left to do… to decide who would be the new king or queen… :)

Grzester and Paweł chose Paulina. So let’s meet our new QA queen:

Hi, I’m Paulina :) I study human-computer interaction and applied psychology.

When I found out about the QA internship program at Lunar Logic, I quickly decided to apply. The test application really had many traps, which is why I wrote quite a long report on it. I sent it and kept my fingers crossed. The results came in quickly and I was accepted into the next round of recruitment. This was really exciting and I couldn’t wait for the interview.

The recruitment atmosphere was very pleasant and laid-back, so from the moment I entered the front door I started taking a liking to the office. I didn’t think that a job interview could be so relaxed. I spoke a lot about myself and the process of testing the WTS! application.

I’m glad to say that winter didn’t come to Poland and I got accepted into a really great internship.

Thank you, Grzester, for bringing Paulina to our company. It was a really successful recruitment :)

 

Winter is coming… If you are interested in ~~game of thrones~~ a QA position at Lunar Logic, you need to ~~outsmart your opponents~~ apply on our internship page. Then we’ll send you ~~a crow~~ an e-mail with more information.

What’s going to happen next? We will deliver you ~~a sword~~ a link to an application, which you should test (in order to find as many ~~traps~~ mistakes as you can, which we left there for you).

 

Internship at Lunar Logic

As those of you who follow us more closely know, we are not just a software development shop. We cover anything from design through development to testing, and if you need a helping hand with shaping your product, we can do that too.

One of the tricky parts of running such an organization is how broad a range of stuff we want to learn. While there are plenty of software development-focused events and it’s pretty easy to find an Agile or Lean conference around, it gets more difficult for designers and testers.

This is why we couldn’t miss the opportunity to send our representatives to Agile Testing Days (AgileTD), the biggest event focused on testing in Europe.

Pawel’s Story

For me AgileTD, like pretty much any other conference, was mainly an opportunity to network with people. This is why the fact that I had to leave early was a bummer. Nevertheless, I finally met face to face some of the most awesome people in the testing community, like Lisa Crispin, Janet Gregory and Dan North. If nothing else, this alone would have made the trip to Potsdam worthwhile.

Not only that. Hallway and evening chats with fellow speakers and attendees were, as always, opportunities to challenge my thinking. I especially liked the Lean Coffees held by Lisa and Janet early every morning. Want to discuss real problems and get insights from practitioners? There’s no better opportunity.

The sessions, as usual, were mixed. There were those that I loved and those I didn’t learn that much from. I’m happy that my presentation on effective teams seemed to stir discussion and bring some controversy. The worst-case scenario would have been if the only reaction was “meh…”

The interesting, yet sad, observation is that the improvements happening in the testing domain seem to be slower and less common than those happening around software development. From a patriot’s point of view I’m also sad that only a few Poles showed up. If we want to make a difference in the software development world, our presence at such events should be more significant.

All in all, an awesome event, with enough great discussions, opportunities to learn and hugs to remember it for a long time. As always it’s all about people and Agile Testing Days brought the right people to one place.

Grzester’s Story

Two months ago I was asked by Paweł if I wanted to participate in an agile conference focused mainly on testing. I couldn’t believe what he was saying.

“An agile testing conference? I thought something like that didn’t exist,” I replied. I was convinced that at conferences testing was treated as something peripheral. I quickly typed Agile Testing Days 2013 into Google. I was in heaven. A three-day conference about testing; three days of exchanging experience with other QAs, testers and other people involved with testing and agile environments. My answer to Paweł’s question was quick:

“Of course I want to go there.” Two months passed really fast…

29 X 2013

After a brilliant weekend in Berlin, I arrived at the venue early in the morning. During day one I attended many good talks, but two of them really stood out. The first was Tony Bruce’s “Be a Real Team Member”. The presentation started with a small discussion about what it means to be a good team member, then Tony moved very smoothly to Belbin’s study of team member models. But the most important thing for me was the discussion about the things we should do day to day to become a good team member: reciprocation, ‘breaking bread’ (e.g. lunch with other employees), asking questions, feedback, listening to others – so obvious but so enlightening.

“Sometimes there are conversations around that you’d like to be part of. Listen to what’s happening around you.” – Tony Bruce

The second talk, prepared by Peter Saddington, charmed the listeners. Peter asked a few simple questions during his talk, e.g.:

“Are you having fun?” (in your team, at work). “If not, why the hell are you doing this?”

These answered many of my questions related to team composition, behaviour and effectiveness. Elements of psychology interleaved with a reasonable approach made this talk really inspiring.

“Effective leaders should see themselves not as ‘managers’ or even ‘problem solvers’ but as ‘lovers of people’ and ‘inspiration starters’.” – Peter Saddington

The day finished with the lovely ‘MIATPP Halloween Award Night’ party.

30 X 2013

The organisers scheduled four keynotes for the second day. For me the most interesting was the third one, prepared by Dan North: “Accelerating Agile Testing”. At the beginning of the talk Dan asked two questions:

– “How do Agile teams do testing?”

– “How does testing happen?”

Throughout the talk Dan built a nice background for answering the questions above. He neatly defined user experience – “User Experience is the experience a user has” – and offered a fresh look at test automation – “Don’t automate things until they are boring” – which, in conjunction with the theses from his wrap-up slide, once again created a very interesting picture of Agile Testing.

It was time for the Testing Dojo! I had no idea what it was about, so I went to the Dojo a bit scared – unnecessarily. A Testing Dojo is a place where you can learn new testing practices and techniques from other participants, discuss with people, and give and receive constructive feedback about your skills. During my session we had the opportunity to test a small ‘Parking Calculator’ in pairs. You can’t imagine how much fun it is to build a communication channel with a stranger in a few minutes, and you probably can’t imagine how valuable the feedback you receive is.

31 X 2013

I marked day three as the FUN DAY! After two days of pure methodology I decided to check out the lighter talks. My day therefore started with “Natural Born Tester. Are you one?” by Graham Thomas – a great talk about a tester’s nature and predispositions.

I have a few questions for you:

– are you always in the wrong queue?

– are you a fool for the promise of the new thing?

– are you challenged by an unused feature?

– and do you like to play Lemmings or Angry Birds?

If so, maybe you should consider changing your profession to… tester :)

Later I decided to become a Lab Rat at the Test Lab. Try to imagine the situation: a small, dark room hidden at the end of the venue; you open the door and what do you see?! LEGO Mindstorms robots with RGB sensors. I felt like a kid. You play with LEGO, you log issues with the robots’ behaviour and you get badges.


Time and space in the Test Lab are totally warped. Day three ended with a nice keynote preceded by Lisa and Janet’s performance, telling us a story about the ‘dark times’ before Agile was introduced.

Agile Testing Days 2013 was a great experience: a very nicely organised conference with a wide range of topics and opportunities to meet people from the agile community. If you are thinking about going next year, STOP thinking and just DO IT!

When dealing with user-defined data tables, spreadsheet software is an infinite source of UI inspiration. The well-known gesture of dragging a handle in the bottom-right corner of a cell to clone its contents is one of them. Now we bring this functionality to your tables.


Avoid repetitive work

Assume you are filling in a timetable of sorts. You laboriously scheduled a whole day and then realized that three consecutive weeks are going to be the same. What do you do? It’s obvious – just drag the square handle in the cell you’ve just filled and select all the cells that should have the same content. It’s trivial for anyone who has ever seen MS Excel (namely, most users out there). And it’s exactly the functionality you can add to your tables with Dolly.js.

Easy to use with any table

Dolly.js is a jQuery UI widget without any other dependencies. It adds the UI behavior, leaving the data logic implementation up to you. Therefore you can use Dolly with any data structure, no matter how complicated. And it’s markup-independent, so even if you aren’t using a semantic HTML table, Dolly can handle it.

Wow, great! Where can I get it?

GitHub: http://github.com/LunarLogic/dolly.js

Live examples and documentation: http://lunarlogic.github.io/dolly.js/

You are also welcome to share your feedback or code contributions with us!

 

Personal Kanban at work

You might remember one of Mirek’s recent posts, on Pomodoro. It’s only one of the ways we tackle the productivity problem at Lunar. Actually, if you asked a completely anonymous no-boss at the office, he might say something along these lines:

“Pomodoro? Bollocks! It’s like time-boxing your day and expecting that nothing requiring your attention would happen during the time box. And you know what? People keep coming to me, interrupting me, so if I want to be considered a nice guy, and I do, I can’t just answer: I’m in a pomodoro, bugger off, sorry.”

On a more serious note, I like Pomodoro, but my line of work doesn’t really suit the 25/5 minute pattern. I do a lot of things that take less than 5 minutes. Only occasionally am I sucked into something that lasts a couple of hours.

At the same time, my primary work is being available to people when they need me. No matter whether it is a project-related issue, a serious chat about an employment contract or a game of Dominion, I shall be there when I’m needed.

The problem is that people don’t tend to hold off on discovering a problem until my pomodoro is finished.

That’s why I prefer a different approach.

Personal Kanban for the win!

Personal Kanban is a simple, but not simplistic, application of Kanban at a personal level. You wouldn’t have guessed, would you? Anyway, the two basic rules are:

• Visualize work

• Limit work in progress

A typical Personal Kanban implementation is a board with three columns: to do, in progress and done. Then there are work items that travel (surprise, surprise) from to do to done. Nothing fancy here so far.

The caveat is the second rule, which is about limiting the number of things that are ongoing right now. This shifts one’s focus to finishing stuff that was started instead of just pulling more and more work from the queue.

Whenever I’m back at my computer, I just glance at the board to see whether I was already working on something. If so, I simply come back to that task. Otherwise, I just pull another work item from my to do column.

Not that easy

One might say: “Whoa! Is it that easy?” Well, actually it isn’t. In fact, this is my second implementation of Personal Kanban and I’m doing a few things a bit differently now.

First of all, hand-offs are always a problem. I mean, if completing a task requires someone else’s work too, you don’t have full control over getting it done. If I need Mirek’s feedback on the design of one of our clients’ sites, I won’t be able to answer the client until Mirek does his part.

That’s why I pay a lot of attention to the way I define my tasks. I prefer very atomic and fully controllable tasks, like: ask Mirek for feedback, remind Mirek about the pending feedback, remind him once again, and only then answer the client. This changes the dynamics of my Personal Kanban board. More on that later.

The second thing is being serious about work in progress (WIP) limits. In this case, WIP limits are not only there to limit context switching and thus directly improve productivity. They also help me focus on a single thing.

If something is ongoing, I just get it done, and then I can erase the part of my memory that kept reminding me about the task. Actually, I’d prefer to count on my memory, but over the years I’ve learned that it is so sloppy that it can’t be trusted.

The third challenge is to remember to run all the work through the board. Sometimes it really feels awkward to put something on a sticky note despite the fact that it’s going to take three minutes and you are starting to work on that right now.

You start appreciating that once you learn that, for whatever reason (I blame Murphy’s Law), people tend to interrupt you in the middle of these three-minute tasks even more frequently than during the bigger ones. It actually helps avoid situations where something falls through the cracks only to be found six months later.

Work in progress limits

With this implementation of Personal Kanban, I’m really aggressive when it comes to WIP limits. I use the WIP limit of 1. I can only do this because I define my tasks in a way that makes them blocked very, very rarely, if ever.

At the same time, this is so much of a boost to my focus that I prefer to spend a few more seconds when adding stuff to my to do column rather than keep wondering later about the exact status of the tasks in progress.

It shouldn’t be a surprise, as human brains are simply wired in a way that keeps us thinking about unfinished business. This property even has a name – the Zeigarnik effect. The Zeigarnik effect simply tells us that we humans tend to interrupt ourselves thinking about stuff that we left incomplete.

Just think – when was the last time that, while doing something completely different, you suddenly thought about that email you were supposed to answer? It comes completely out of the blue. No trigger from the outside world whatsoever. I’ll tell you a secret – it’s your brain working as it is designed to work.

So basically that’s why I use such an aggressive WIP limit. It means that for the stuff I need to remember not now, but in a couple of weeks, I need some sort of reminders. And you know what? Any calendar app does that just fine.

It’s mobility, you fool!

The tricky part is that I’m very mobile. I don’t even have my own desk. This means that a whiteboard is hardly an option. Well, a classic whiteboard, that is. What I found somewhere in our office dungeons is a tiny 19-inch whiteboard that I can take with me as easily as I can carry my laptop.

So I take it with me whenever I’m moving to a random place of the office.

I’ve tried using a web application for that, but it was far more hassle to interact with. Before long, I ended up choosing work items without even looking at what was in the app, thus rendering the electronic board totally irrelevant.

The outcomes

It’s not my first time with Personal Kanban, so the productivity boost wasn’t a surprise. What was a surprise was the impact of the simple tweaks of redefining work items and limiting WIP to 1. I feel like I’m accomplishing tons of work. In fact, I can even see it, because I have to empty my done column every now and then.

Another nice thing is that I have motivation to keep my backlog empty. You’ve definitely heard of the inbox zero idea – not having a single email left that you still have to act on. I stopped pursuing that goal a long time ago.

However, with Personal Kanban it’s different. It’s like one of those games where it’s always just one more game. In my case, it’s just one more task. They’re small anyway.

At the same time, I believe there was no impact whatsoever on my availability to the whole team. I’m just there. If you need me, I’m happy to take a break because I know that once I’m back, I’m back to work on exactly the same thing as before.

What’s Grinder? It’s a free distributed load testing framework written in Java. It allows you to write test scripts in Jython and Clojure. I’m not going to advertise it or describe all its features, so for more detailed information go to the Grinder website. In this post I’m going to walk through load testing a web application with Grinder.

Grinder runs an agent process on a machine, and the agent can create and stop worker processes. Each worker process can run tests in many threads. The console is the process that coordinates the agents and lets you control them using a GUI; it also gathers statistics and allows script editing and distribution. You don’t need the console to run tests on a single machine. These are the three main elements of Grinder. You can create a script from scratch on your own or use a proxy recorder. I’ll use the proxy to record a script and modify it later to fit my needs. I’ll use the most recent version of Grinder, which is 3.11; it supports Jython 2.5.3 pretty well.

The first thing to do is download the proper version of Grinder from https://www.sourceforge.net/projects/grinder. Extract it to some dir – if you want it in your repo as a dependency, put it in the vendor dir (in the case of an RoR project). If not, it’s up to you where it lands (just don’t forget the path). To use Grinder 3 you just need at least Java 6 on your machine.

Next, you’ll probably want to prepare a test script. You can write it from scratch (which might be quite a tedious task) or use the Grinder proxy to record it. The second option is more convenient – at least it was for me, as I had a couple of complex scenarios to script. But before you do that, prepare a set of small shell scripts which will help you start the proxy or the console and run a test. The first one, named set_grinder_env.sh, just sets and exports a couple of required variables.

#!/bin/bash

GRINDERPATH=/path/to/grinder/dir
CLASSPATH=$GRINDERPATH/lib/grinder.jar:$CLASSPATH

GRINDERPROPERTIES=/path/to/grinder/properties/file

export CLASSPATH GRINDERPROPERTIES

# add java to PATH if needed
# JAVA_HOME=
# PATH=$JAVA_HOME:$PATH
# export PATH

With Grinder added to the CLASSPATH, you can start the recording proxy. I’ve also created a small script for that, as recommended in the getting started guide on the Grinder website.

#!/bin/bash

. ./set_grinder_env.sh
java net.grinder.TCPProxy -console -http > grinder.py

The recording proxy is started with the -console option, which displays a control window that allows you to terminate the proxy process cleanly; the -http option enables filters for recording HTTP traffic. By default the proxy sends its output to the terminal, so it’s redirected to the grinder.py file. The proxy listens on port 8001. You can record additional headers if the basic set is not enough for you – see the additional headers section of the Grinder documentation for a description.

Before recording script you need to configure your browser to use this proxy. In recent Firefox (v. 23) it’s Edit -> Preferences -> Advanced -> Network -> Settings.

Now the script can be recorded. Let’s say you want to simulate 100 users signing in to your application. After starting the proxy, just point your browser at your application and sign in as some existing user. You may perform some more actions; it depends on the scenarios you want to automate. When you finish recording, terminate the proxy process using the console window – this guarantees the script is saved properly. You should end up with something like this in the grinder.py file.

# The Grinder 3.10
# HTTP script recorded by TCPProxy at 2013-08-22 15:23:15

from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPPluginControl, HTTPRequest
from HTTPClient import NVPair
connectionDefaults = HTTPPluginControl.getConnectionDefaults()
httpUtilities = HTTPPluginControl.getHTTPUtilities()

# To use a proxy server, uncomment the next line and set the host and port.
# connectionDefaults.setProxyServer("localhost", 8001)

# These definitions at the top level of the file are evaluated once
# when the worker process is started.

connectionDefaults.defaultHeaders = \
[ NVPair('Accept-Encoding', 'gzip, deflate'),
NVPair('Accept-Language', 'en-US,en;q=0.5'),
NVPair('User-Agent', 'Mozilla/5.0 (X11; Linux i686; rv:23.0) Gecko/20100101 Firefox/23.0'), ]

headers0= \
[ NVPair('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'), ]

headers1= \
[ NVPair('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
NVPair('Referer', 'http://example.com/'), ]

url0 = 'http://example.com:80'
url1 = 'http://example2.com:80'

# Create an HTTPRequest for each request, then replace the
# reference to the HTTPRequest with an instrumented version.
# You can access the unadorned instance using request101.__target__.
request101 = HTTPRequest(url=url0, headers=headers0)
request101 = Test(101, 'GET /').wrap(request101)

request102 = HTTPRequest(url=url0, headers=headers0)
request102 = Test(102, 'GET favicon.ico').wrap(request102)

request201 = HTTPRequest(url=url1)
request201 = Test(201, 'GET 994ba82a').wrap(request201)

request301 = HTTPRequest(url=url0, headers=headers1)
request301 = Test(301, 'POST user_sessions').wrap(request301)

request302 = HTTPRequest(url=url0, headers=headers1)
request302 = Test(302, 'GET home').wrap(request302)

request401 = HTTPRequest(url=url1)
request401 = Test(401, 'GET 994ba82a').wrap(request401)

class TestRunner:
    """A TestRunner instance is created for each worker thread."""

    # A method for each recorded page.
    def page1(self):
        """GET / (requests 101-102)."""
        result = request101.GET('/')

        grinder.sleep(183)
        request102.GET('/favicon.ico')

        return result

    def page2(self):
        """GET 994ba82a (request 201)."""
        self.token_a = '127127'
        self.token_be = '19098'
        self.token_qt = '14'
        self.token_ap = '532'
        self.token_dc = '678'
        self.token_fe = '1917'
        self.token_to = 'blah'
        self.token_v = '42'
        self.token_jsonp = 'NREUM.setToken'
        result = request201.GET('/994ba82a' +
                                '?a=' + self.token_a +
                                '&be=' + self.token_be +
                                '&qt=' + self.token_qt +
                                '&ap=' + self.token_ap +
                                '&dc=' + self.token_dc +
                                '&fe=' + self.token_fe +
                                '&to=' + self.token_to +
                                '&v=' + self.token_v +
                                '&jsonp=' + self.token_jsonp)

        return result

    def page3(self):
        """POST user_sessions (requests 301-302)."""

        # Expecting 302 'Found'
        result = request301.POST('/user_sessions',
                                 ( NVPair('utf8', '✓'),
                                   NVPair('authenticity_token', 'token'),
                                   NVPair('user_session[login]', 'user'),
                                   NVPair('user_session[password]', 'password'),
                                   NVPair('commit', 'Login'), ),
                                 ( NVPair('Content-Type', 'application/x-www-form-urlencoded'), ))

        grinder.sleep(55)
        request302.GET('/home')

        return result

    def page4(self):
        """GET 994ba82a (request 401)."""
        self.token_be = '2156'
        self.token_qt = '19'
        self.token_ap = '1095'
        self.token_dc = '21345'
        self.token_fe = '22795'
        self.token_to = 'blah'
        result = request401.GET('/994ba82a' +
                                '?a=' + self.token_a +
                                '&be=' + self.token_be +
                                '&qt=' + self.token_qt +
                                '&ap=' + self.token_ap +
                                '&dc=' + self.token_dc +
                                '&fe=' + self.token_fe +
                                '&to=' + self.token_to +
                                '&v=' + self.token_v +
                                '&jsonp=' + self.token_jsonp)

        return result

    def __call__(self):
        """Called for every run performed by the worker thread."""
        self.page1()    # GET / (requests 101-102)

        grinder.sleep(1780)
        self.page2()    # GET 994ba82a (request 201)

        grinder.sleep(13973)
        self.page3()    # POST user_sessions (requests 301-302)

        grinder.sleep(85)
        self.page4()    # GET 994ba82a (request 401)

def instrumentMethod(test, method_name, c=TestRunner):
    """Instrument a method with the given Test."""
    unadorned = getattr(c, method_name)
    import new
    method = new.instancemethod(test.wrap(unadorned), None, c)
    setattr(c, method_name, method)

# Replace each method with an instrumented version.
# You can call the unadorned method using self.page1.__target__().
instrumentMethod(Test(100, 'Page 1'), 'page1')
instrumentMethod(Test(200, 'Page 2'), 'page2')
instrumentMethod(Test(300, 'Page 3'), 'page3')
instrumentMethod(Test(400, 'Page 4'), 'page4')

The next step is to parametrize and clean this up a bit, so it will be more readable and easier to maintain. I removed the requests, methods and constants that referred to external services. I also got rid of requests for images, stylesheets, etc., and corrected the method names and request numbering. Now each page method wraps a single request and is separately instrumented, so statistics are gathered for each of them. I also added a couple of helper methods: for loading users from a file, getting a random user, and writing a response to a file (which sometimes helps when debugging your test). After all the modifications the script looked more or less like this:

# The Grinder 3.10
# HTTP script recorded by TCPProxy at 2013-08-22 15:23:15

from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPPluginControl, HTTPRequest
from HTTPClient import NVPair
import re
import time
import sys
import random

connectionDefaults = HTTPPluginControl.getConnectionDefaults()
connectionDefaults.useContentEncoding = 1
httpUtilities = HTTPPluginControl.getHTTPUtilities()

# To use a proxy server, uncomment the next line and set the host and port.
# connectionDefaults.setProxyServer("localhost", 8001)

# These definitions at the top level of the file are evaluated once
# when the worker process is started.

connectionDefaults.defaultHeaders = \
[ NVPair('Accept-Encoding', 'gzip, deflate'),
NVPair('Accept-Language', 'en-US,en;q=0.5'),
NVPair('User-Agent', 'Mozilla/5.0 (X11; Linux i686; rv:23.0) Gecko/20100101 Firefox/23.0'), ]

headers0= \
[ NVPair('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'), ]

headers1= \
[ NVPair('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
NVPair('Referer', 'http://example.com/'), ]

url0 = 'http://example.com'

# Create an HTTPRequest for each request, then replace the
# reference to the HTTPRequest with an instrumented version.
# You can access the unadorned instance using request101.__target__.
request101 = HTTPRequest(url=url0, headers=headers0)
request101 = Test(101, 'GET /').wrap(request101)

request201 = HTTPRequest(url=url0, headers=headers1)
request201 = Test(201, 'POST user_sessions').wrap(request201)

request301 = HTTPRequest(url=url0, headers=headers1)
request301 = Test(301, 'GET home').wrap(request301)

# constants needed in test
password = 'test_password'
usernames_file = 'path/to/usernames/file'

def load_file(filename):
    file = open(filename, 'r')
    list = []

    for line in file.readlines():
        list.append(str(line.strip()))
    file.close()

    return list

usernames = load_file(usernames_file)

class TestRunner:
    """A TestRunner instance is created for each worker thread."""

    # A method for each recorded page.
    def page1(self):
        """GET / (request 101)."""
        result = request101.GET('/')

        return result

    def page2(self):
        """POST user_sessions (request 201)."""

        login = self.get_random_user(usernames)

        # Expecting 302 'Found'
        result = request201.POST('/user_sessions',
                                 ( NVPair('utf8', '✓'),
                                   NVPair('authenticity_token', 'token'),
                                   NVPair('user_session[login]', login),
                                   NVPair('user_session[password]', password),
                                   NVPair('commit', 'Login'), ),
                                 ( NVPair('Content-Type', 'application/x-www-form-urlencoded'), ))

        return result

    def page3(self):
        """GET /home (request 301)."""
        result = request301.GET('/home')

        return result

    def get_random_user(self, users_list):
        user = random.choice(users_list)

        return user

    def write_to_file(self, response, path):
        file = open(path + str(time.time()) + ".html", "w")
        print >> file, response
        file.close()

        return

    def log(self, message):
        grinder.logger.info(message)

        return

    def __call__(self):
        """Called for every run performed by the worker thread."""
        self.page1()    # GET / (request 101)

        grinder.sleep(1780)
        self.page2()    # POST user_sessions (request 201)

        grinder.sleep(13973)
        self.page3()    # GET /home (request 301)

def instrumentMethod(test, method_name, c=TestRunner):
    """Instrument a method with the given Test."""
    unadorned = getattr(c, method_name)
    import new
    method = new.instancemethod(test.wrap(unadorned), None, c)
    setattr(c, method_name, method)

# Replace each method with an instrumented version.
# You can call the unadorned method using self.page1.__target__().
instrumentMethod(Test(100, 'Page 1'), 'page1')
instrumentMethod(Test(200, 'Page 2'), 'page2')
instrumentMethod(Test(300, 'Page 3'), 'page3')

Before you run the test, there’s one thing you have to do: prepare the properties file. It is the file that tells Grinder which script to run, how many processes and threads to create, how many times to run the script, how long the sleep times are, and much more. You can even set a path to a different Jython version there. I had to set the grinder.jvm.arguments property to stop Grinder complaining about caches. I also turned off the console. In my test I wanted 1 process with 3 threads, each thread repeating the login scenario 5 times. The initialSleepTime property tells each thread to wait a random time between 0 and 10000 ms before starting. All times are specified in milliseconds.

Grinder gives you many ways of configuring process creation. You can run a couple of processes on different machines, and processes on different machines can create different numbers of threads (each load injector has its own properties file). All processes might be started at the same time, or you can set how many processes will be started after each interval, and even how many will be created initially. Here’s my grinder.properties file.

grinder.script = grinder_test.py # path to test script
grinder.processes = 1 # number of worker processes each agent starts, default 1
grinder.threads = 3 # threads for each process, default 1
grinder.runs = 5 # number of iterations, default 1; when 0 and the console is in use, run until the console sends a stop or reset signal

# grinder.consoleHost = consolehost
# grinder.consolePort # default 6372

grinder.logDirectory = log/grinder # default agent's working dir
grinder.numberOfOldLogs = 3 # default 1

# grinder.hostID = myagent # overrides "host" string used in log filenames and logs
# grinder.logProcessStreams = false

grinder.initialSleepTime=10000 # default 0; each thread waits a random time between 0 and this value before starting
# grinder.sleepTimeFactor=0.01 # default 1, applies to all sleep times in scripts and property file (divide by that value)
# grinder.sleepTimeVariation=0.005 # default 0.2, if the sleep time is specified as 1000 and the sleepTimeVariation is set to 0.1, then 99.75% of the actual sleep times will be between 900 and 1100 milliseconds

# grinder.processIncrement = 1 # default start all together, if set the agent will ramp up the number of worker processes, starting the number specified every grinder.processesIncrementInterval
# grinder.processIncrementInterval = 10000 # default 60000ms
# process.initialProcesses = 1 # default grinder.processIncrement, sets the initial number of worker processes to start
# grinder.duration = 60000 # default forever, how long process should run
# grinder.debug.singleprocess = true
# grinder.dcrinstrumentation = true

# grinder.jvm = # for alternate jvm, default java
# grinder.jvm.classpath =
grinder.jvm.arguments = -Dpython.cachedir=/tmp/jython # you can set here jython cachedir, or alternate jython version to use

grinder.useConsole = false # default true
# grinder.reportToConsole.interval = 100 # default 500ms
# grinder.reportTimesToConsole = false # default true, http://grinder.sourceforge.net/faq.html#timing

Finally, you can run the test. Use this bash script to do it the easy way. Just remember to create all the necessary test data, like users, etc…

#!/bin/bash

. ./set_grinder_env.sh
java net.grinder.Grinder $GRINDERPROPERTIES

The summary report that you will see on your screen will look like this.

2013-09-19 16:15:46,916 INFO myhost- thread-: finished 1 run
2013-09-19 16:15:46,917 INFO myhost- : elapsed time is 12223 ms
2013-09-19 16:15:46,917 INFO myhost- : Final statistics for this process:
2013-09-19 16:15:46,938 INFO myhost- :
Tests Errors Mean Test Test Time TPS Mean Response Response Mean time to Mean time to Mean time to
Time (ms) Standard response bytes per errors resolve host establish first byte
Deviation length second connection
(ms)

(Test 100 1 1255,00 ,00 ,08 ,00 ,00 ,00 ,00 ,00) "Page 1"
Test 101 1 1254,00 ,00 ,08 16831,00 1376,99 10,00 139,00 1246,00 "GET / (requests 101)."
(Test 200 1 901,00 ,00 ,08 ,00 ,00 ,00 ,00 ,00) "Page 2"
Test 201 1 332,00 ,00 ,08 16828,00 1376,75 10,00 139,00 329,00 "POST user_sessions (request 201)."
(Test 300 1 176,00 ,00 ,08 ,00 ,00 ,00 ,00 ,00) "Page 3"
Test 301 1 176,00 ,00 ,08 96,00 7,85 10,00 139,00 175,00 "GET / (requests 301)."

It shows the number of successful tests and errors. The total number of test executions equals the number of tests × processes × threads × runs – with my settings (1 process, 3 threads, 5 runs) each test gets executed 15 times. Totals, means and standard deviations are calculated only for the successful executions. All data is saved to log files. Grinder organizes the log files so that each process saves data to a separate file called $hostname-$process_number.log. Grinder also gathers data about individual test executions in files called $hostname-$process_number-data.log.

When running a test with the console turned on, you’ll see a window similar to this:

Grinder does not generate any graph files, but you can create graphs with other tools listed on the Grinder’s links page, like Grinder Analyzer or Ground Report. Thanks for reading and let me know how grinding your app went!

Lean Coffee Lunar Logic

All-company meetings are boring. Especially when you’ve got to listen to a manager’s rant about yet another robustly streamlined leveraging out of the box. Gosh.

Thankfully, we’ve got no managers. We do have company meetings, though. Once a week, to be precise. It’s called the lean coffee, even though it’s not about watered-down coffee.

Meetings democratised

We gather in the so-called sofa room every Wednesday at noon, making ourselves comfortable in the omnipresent bean bags. We work on two floors, so it’s a good opportunity to meet people you seldom see.

If you want, you may write a topic on the whiteboard, which we’ll vote on after everyone’s in place. Paweł (the no-boss) counts the votes then and we start discussing the most popular topic.

We talk about a variety of issues. We’re a transparent company, with all its pros and cons, so more often than not we talk over the upcoming projects. Or the sales process itself. The lean coffee is not dead serious, so there are often questions concerning the hair colour of some people (hint: ginger) or the boss’s height (hint: stilts).

Time saved

Lean coffee is timeboxed to 30 minutes, with up to 8 minutes per topic (it can be extended to 12 minutes if the need arises). Thus, it’s exactly one pomodoro long. Half an hour is just enough for you to focus and not get tired or bored.

We’re not very strict – the routine may break, especially during hot, lengthy days or when there’s an ultra important topic on the plate. The lean coffee isn’t compulsory, so if you’ve got something big (e.g. a product shipment) to do, no one will drag you to the sofa room.

Folks entertained

Lean coffee is fun. Most of the people usually attend it, passionately interested in the company life. It’s weekly, so we don’t repeat the topics and there’s always something new to listen to. Especially when comfortably reclining in a bean bag.

 

Lean Coffee is sometimes called Open Coffee and it’s an open weekly event for discussing all sorts of matters, e.g. Open Coffee Kraków. More about Lean Coffee from the Limited WIP Society.

pomodoro cow timer

Distractions. Distractions everywhere! Social media. Smartphones. Google Glass coming up. And you just want to get things done.

And the biggest culprit – the office itself. Fancy a coffee? Let’s go for a lunch? Have you seen the rocket dog?

Screw you. Pomodoro in progress.

Short bursts of furious work

Work for 25 minutes. Relax for 5 minutes. Repeat. Simple?

That’s how some of us, including myself, work at Lunar Logic. We set kitchen timers and  work according to the Pomodoro technique‘s rules. ‘Pomodoro’ means tomato in Italian and the name comes from the shape of the first kitchen timer its creator used.

We’re laser-focused and distraction-proof.

A pomodoro a day keeps the doctor away

But why use some fancy technique and not just sit and slam at the keyboard?

My previous boss once said that programmers should work 5 hours a day at most. Strange? Might be. Our industry values quality over quantity – one working feature is always better than three broken ones. And you work way better when you’re rested (#cptobvious).

The Pomodoro technique can make that happen. If you think about the actual working time, it’s ‘just’ 12 pomodoros. Compare 5 hours spent on working features, with 3 hours of short breaks in between, to 8 hours of chit-chatting, drinking coffee and intermittent coding.

Next will be better!

That doesn’t mean you shouldn’t try doing more work sessions – I remember noting last Friday that I could do more than 10 pomodoros. By finishing my day with a work session, for example. After all, you shouldn’t worry about having a break after you leave the office, should you?

You’ll also notice how emotionally rewarding working in pomodoro is. You Get Things Done (R). Stop wasting time. Get rid of that soul-eating feeling of guilt when a day passed and you haven’t produced anything palpable. The sun starts shining. People smile at you. Oh, wait…

The Good, the Bad and the Pomodoro

This technique has some possible drawbacks. Firstly, it’s not for everyone. I’m not a programmer and express myself in words, not code. I found it hard to yield and harness my time into little time boxes.

Secondly, there’s a problem with the “bigger” breaks, i.e. the 15-30 min after every fourth pomodoro. It’s not that easy to eat dinner in under 30 minutes, especially if you cook for yourself. One possible solution is to devote every fourth pomodoro – preferably the 8th one – to cooking. The 25 min + 30 min of a break should suffice. After all, making dinner is a productive activity, isn’t it?

Pomodoro is flexible, yet working for 10 and relaxing for 5 minutes isn’t the best idea. I’ve started with 20/5 and soon moved up to 25/5, because I just felt that I had too much free time! I sometimes have shorter breaks – 3 minutes, when I really feel I’m in the zone. However, skipping the break session doesn’t pay off – you’ll most probably feel tired and less productive during the next pomodoro.

One thing about the kitchen timer: I jumped up in my chair a few times when it rang. Make sure that you and your coworkers don’t… or get a digital timer.

You rule.

With Pomodoro technique, you’re the master of your fate. The tick of the clock means efficiency, not running out of time. You can stretch pomodoro over whole teams, use it to work or study.

So… how many pomodoros have you done today?

P.S. Of course, there is much more to this technique, including information on braving the distractions, to-do lists and whatnot… see for yourself! And check this post about Ping Pong Pomodoro Pair Programming by Adam Pohorecki, who used to work at Lunar Logic and pioneered the technique here :)

Rails Button

Since I haven’t discovered any IDE satisfying my needs for Rails development, I have had to set up my work environment by myself.

This is a good thing because I can tune it to my needs and… a bad thing because, to be efficient, this personal IDE requires firing up several apps and/or issuing a few commands before actually starting to code.

At the very least you need 2 terminal windows (for command line and rails server), a browser window running localhost:3000 and a text editor of your choice with the app project folder opened and ready to code.

Additionally, it would be nice to have a terminal window with automated test suite (e.g. guard) and a rails console, should you need to foolproof some code on the side.

All this means writing many commands and/or clicking here and there every time you are about to begin coding, which in my case happens every single morning.

How about setting this all up in a push of a button?

Here is a recipe for getting this working on Ubuntu with GNOME, bash, Chromium, RVM and Sublime Text 2. I assume you already have your Rails app folder, cloned from GitHub, located in the path-to-your-project dir.

  • Make sure you have Ubuntu w/gnome, rvm, and chromium installed
  • Make sure you can start Sublime Text from command line using subl
  • Download the bash script and put it somewhere (this guide assumes: /opt/bin/dev_start.sh)
  • Add execution rights: chmod +x /opt/bin/dev_start.sh
  • Edit your ~/.bashrc file and add the following function at the very bottom:

    function dev {
      if [ "2" -ne "$#" ]; then
        echo "Usage: dev start|stop Project-folder-path" && return 1
      elif [ "start" == "$1" ]; then
        /opt/bin/dev_start.sh $2
      else
        echo "$1 method not implemented"
      fi
    }

  • Source the .bashrc file (. ~/.bashrc) or log out and log back in to your shell
  • Create and configure gnome terminal profiles named Rails-server, Rails-guard, Rails-console. Remember to check the option Run command as a login shell, otherwise RVM may behave strangely. You can experiment with other options; you can see my preferred setup for reference.
  • Additionally you can configure terminal titles and color schemes which are a nice-to-have feature as you can tell where you are at a glance.
  • You’re almost done! Run dev start path-to-your-project-dir from your terminal window and… voila! Wait a few seconds and… start coding!

    Having all this set up enables me to enjoy my morning espresso 1 minute longer :)

    Experiment with this simple script and feel free to post any comments with improvement tips / porting to other platforms (any mac users here?) etc.

    Google is an internet behemoth, isn’t it? A single G team is bigger than the whole of Lunar. And it’s Google. They know everything. Yet they’re still interested in companies smaller than theirs.

    We met a gang of Google’s Product Manager freshmen last Friday. They came to our office to learn how Lunar Logic works. Why?

    Imagine that you finish your college and get employed by Google. Straight away. As a product manager.

    That’s more or less what the Associate Product Manager programme is about. Google wants young people unsullied by the traditional management approach. You’ve got to excel at maths etc., but you get loads of perks. One of them is travelling across the globe to visit other software companies. To learn & compare their approach with that of other players. Simple!

    Google’s management freshmen learned about our flat structure, which results in us having only one PM of our own (Paweł Brodziński) – who is also a scrum master and the CEO.

    We’ve told them about our smashing parliament watch app that never got released due to a revolution in a certain middle eastern country… that was quite philosophical, actually!

    And we found out that Google is more technocratic than it seems. It manifests in the technical people having more to say in product development than e.g. marketing depts. Which doesn’t mean they’ve got nothing to say, of course.

    It’s a pity we didn’t have more time, for the visit was very inspiring. Hope we get more visits like this in the future :)

    If you’re working on a Rails app, let’s do a quick test: check how many files you have in the lib directory of your project. If you have less than 10, it must be a fairly new project. Most projects I’ve seen have at least 20-30. In our current project we had over 200 of them a few months ago…

    The problem with the lib directory is that there are no official guidelines about what should go there. Rails has a well-defined directory structure, which is great because you don’t have to think about how to organize your code – all the directories covering controllers, models, views, etc. are set up from the beginning. The downside is that once you start creating files that don’t fall into any of the predefined categories, it’s hard to decide what to do with them. So they usually end up in lib, which becomes a real mess over time.

    I started searching for a solution – I asked on Twitter and looked for any relevant blog posts. The best idea I found was the one presented by Bryan Helmkamp:

    I recommend any code that is not specific to the domain of the application goes in lib/. Now the issue is how to define “specific to the domain”. I apply a litmus test that almost always provides a clear answer: If instead of my app, I were building a social networking site for pet turtles (let’s call it MyTurtleFaceSpace) is there a chance I would use this code?

    That makes perfect sense: the app directory is for your app’s code, as the name implies. All of it – not just assets, controllers, helpers, mailers, models and views, just because these are the subdirectories that Rails creates by default. You can always make your own.

    So what exactly did we find in our lib directory?

    Resque jobs

    If you use any background queue library such as Delayed Job or Resque, you probably have a collection of classes for the tasks performed in the background. In our project we have a whole directory tree consisting of Resque job files – guess where we had to put them? In lib of course, specifically in lib/queues.

    But the background jobs are an integral part of the app: they perform work that would normally be done by controllers or models, and they usually call other models, except they do it asynchronously. So, according to the rule mentioned above, they should be put somewhere inside the app directory. Of course there was no pre-configured directory where we could put them all, but that doesn’t mean we can’t add one, and that’s what we did – the Resque classes were moved to app/jobs.
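
    For illustration, here’s a minimal sketch of what such a class might look like once moved to app/jobs – the ImageResizeJob name and the Image model are made up, but the @queue / self.perform structure is the standard Resque convention:

    # app/jobs/image_resize_job.rb (hypothetical example)
    class ImageResizeJob
      # name of the Resque queue this job goes to
      @queue = :images

      # Resque calls this with the arguments passed to Resque.enqueue
      def self.perform(image_id)
        Image.find(image_id).resize!
      end
    end

    # enqueued from a model or controller:
    # Resque.enqueue(ImageResizeJob, image.id)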

    Decorators & presenters

    We also have quite a large directory of decorator and presenter classes, which we have put into app/decorators. These are all support classes used for rendering views, presenting models to them in some way; most of the time it’s about rendering model data to JSON, which is used by the API controllers. We also have some presenters used by standard HTML views, which organize records into the specific type of hierarchy that a particular partial requires, in order to move that logic out of ERB partials (or models).

    Concerns

    Concerns are modules that are intended to be included in other classes, used for sharing code between a group of related classes, e.g. models or controllers. This concept was popularized by DHH and 37signals and is now officially supported in Rails 4 – Rails now auto-generates app/controllers/concerns and app/models/concerns directories and adds them to the autoloading list.

    This feature is a bit controversial and there are some people who say that there’s no place for such a thing in good Rails apps. I disagree – I think concerns are OK unless they’re overused.

    If you have 200-line-long concern modules that are included everywhere, or if you have as many concerns as you have models, you’re probably doing something wrong – any bigger, isolated pieces of functionality should be extracted from concerns into separate, independent classes. But if you have a few short modules with just a couple of short methods each, they don’t do any harm at all and can make your code shorter and more DRY. One example could be when a few models have a field with the same name (e.g. enabled) and you want to share some setters, scopes or finders that deal with that field, which would look exactly the same in all of those models.
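
    As a sketch of that last example – assuming a boolean enabled column, with the Enablable and Invoice names invented for illustration – such a concern could look like this:

    # app/models/concerns/enablable.rb (hypothetical example)
    module Enablable
      extend ActiveSupport::Concern

      included do
        # shared scopes dealing with the enabled field
        scope :enabled,  -> { where(enabled: true) }
        scope :disabled, -> { where(enabled: false) }
      end

      def enable!
        update_attribute(:enabled, true)
      end

      def disable!
        update_attribute(:enabled, false)
      end
    end

    # app/models/invoice.rb
    class Invoice < ActiveRecord::Base
      include Enablable
    end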

    The concerns support added in Rails 4 is really just a change to generators and the autoloading list, so you can add this yourself easily to your Rails 3 apps. We’ve also added app/decorators/concerns which holds a couple of shared modules used by presenters.
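
    A sketch of what that change amounts to in a Rails 3 app – just a few extra entries in the application config (the exact list of directories is up to you):

    # config/application.rb (Rails 3) – what Rails 4 sets up for you
    config.autoload_paths += %W(
      #{config.root}/app/models/concerns
      #{config.root}/app/controllers/concerns
      #{config.root}/app/decorators/concerns
    )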

    Services

    Another non-standard directory that we’ve added is app/services, in which we keep a group of classes and modules that don’t belong to any other category such as models or presenters, but are still specific to this project’s domain. This is a pretty broad category, though not as broad as the old lib directory (right now we have about 40 files and directories in it). Some services are classes that you make instances of, some are modules that you use directly and some are whole directories of a few cooperating classes grouped in a namespace.

    The name “service” comes from a pattern called Service Object which is gaining some popularity in the Rails community recently, and it’s basically about extracting pieces of specific functionality that would be normally written in a model or a controller to a separate class or module. The reason is that if you put everything that a user can do into the User model, as is often the case at the beginning of a project, this model can grow to hundreds or thousands of lines of code over time (user.rb is often the most complex class in a Rails project).
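
    Here’s a minimal sketch of the pattern – the UserRegistration name, the mailer and the controller call are all made up for illustration:

    # app/services/user_registration.rb (hypothetical example)
    class UserRegistration
      def initialize(params)
        @params = params
      end

      # creates the account and sends the welcome e-mail,
      # instead of burying that logic in the User model or a controller
      def call
        user = User.create!(@params)
        WelcomeMailer.welcome(user).deliver
        user
      end
    end

    # in the controller:
    # UserRegistration.new(params[:user]).call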

    Models

    A few files from lib actually ended up in the app/models directory. The purpose of app/models seems clear, until you start thinking about it: what is a model, really? Is it just for subclasses of ActiveRecord (or ROM/Mongoid/etc.), which represent tables in your database, or is it for all classes which are part of your model layer, regardless of how they’re implemented? At what point is it not a model anymore, but rather a service?

    We’ve decided not to restrict app/models only to ActiveRecord models, but instead just use common sense to decide what should go there. Some rough guidelines that we’ve used were:

    • AR models go to models
    • classes whose purpose is to store and fetch data from Redis structures also go to models – after all, the only difference from AR models is that they use a different storage backend (see the sketch after this list)
    • classes that are mostly about storing and accessing data, validations, calculations etc. should rather go to models
    • classes that are about interaction, doing, changing or sending something (often with names like “Creator”, “Handler”, “Uploader”, etc…) go to services
    • groups of classes in a namespace usually go to services (e.g. Auth module that implements various kinds of authentication or an ABTesting module which handles A/B tests)
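
    To illustrate the Redis guideline above, here’s a sketch of such a model – the VisitCounter name and the $redis connection global are assumptions, not actual project code:

    # app/models/visit_counter.rb (hypothetical example)
    class VisitCounter
      def initialize(page_id)
        @key = "visits:#{page_id}"
      end

      # both methods just store and fetch data, so this still feels like a model,
      # even though there's no ActiveRecord underneath
      def increment
        $redis.incr(@key)
      end

      def count
        $redis.get(@key).to_i
      end
    end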

    Monkey patches

    The Ruby open classes feature that allows you to monkey-patch other people’s code is a great thing. Even though it’s considered dangerous, it’s often hard to resist, because it’s so easy and gives you the power to change anything you want. We also had a bunch of monkey patches for other classes scattered over the project, mostly somewhere in lib and in config/initializers. Now we’ve moved them all to lib, divided into two groups.

    The first one, lib/ext, is for extensions to core Ruby classes such as Array or String. There aren’t a lot of these, but sometimes you really want that core class to have that particular method instead of having to wrap the objects with something. The other directory is lib/hacks and it’s meant for, well, hacks; the files that should ideally not exist at all, and will hopefully be removed in the future, but for now they have to be there because the world is not perfect and sometimes you just have to hack something to make it work. At least this way we clearly see how many of those we have – this is similar to the idea of having a shame.css stylesheet that keeps all the ugly CSS you aren’t proud of, isolated from the rest of the code.
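
    A tiny, made-up example of what belongs in lib/ext – a generic core-class extension with no knowledge of the app:

    # lib/ext/array.rb (hypothetical example)
    class Array
      # Returns the arithmetic mean of the elements, or nil for an empty array.
      def average
        return nil if empty?
        reduce(:+) / size.to_f
      end
    end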

    What stays in lib

    The lib directory should ideally contain only those files that are generic and reusable (those that could be useful in the turtle social networking site). Things like a basic interface to some web service, database tools, asset processors or compressors definitely belong there. On the other hand, things that use words like “user”, “game”, “event” or any other words that appear in your model names probably don’t belong in lib.

    Here are my guidelines for a proper lib file:

    • it should not access any of your models, services or anything else from the app in any way
    • it can only access other libs, Ruby core libraries or stuff from gems
    • it should not rely on any global variables, constants or ENV variables to be defined
    • it should be easily extracted to a gem and put on GitHub and RubyGems without too much effort

    If it needs a small amount of project-specific configuration (e.g. an API key for a web service), make it externally configurable, just like you’d do if you wanted to put it in a gem, e.g.:

    # lib/twitter_poster.rb
    module TwitterPoster
      mattr_accessor :api_key
    end

    # config/initializers/twitter.rb
    TwitterPoster.api_key = 'qwerty'

    At the moment we have just 46 Ruby files in lib, including the extensions and hacks.

    Other approaches

    This list is clearly not a complete solution that covers every possible case in every Rails project; every project is different, each has a different set of features and is built slightly differently. Not everyone agrees that the Rails conventions are something that you should try to stick with, and some people in the community argue that you should build your whole app separately from Rails and only use the app directory to build an interface between Rails and the core of your app. I wouldn’t go that far, but I’d agree that you should only treat the default directory structure as a starting point and then modify it according to your needs – where you end up will depend on your application’s complexity, architecture and your team’s preferences.

    ¡Hola! I saw the future of the web and it was mobile, user-centered and fun to develop. I also saw Barcelona and it was loud, sunny and amazing, but you don’t want to read about that, right? So let me share some of my thoughts after WebVisions and a few ideas that seem most important to me as a web developer.

    Park Güell

    It seems like by now everyone knows the importance of responsive design. This is an old idea. Yet a lot of developers – and clients – just don’t care enough. Mobile devices are still something to consider only after the desktop version of a web app is ready and rolling. Contrary to common sense, responsive web design still isn’t the default way of doing things. Why? Let it finally be the rule, not just a feature.

    Both from a materialistic, business-like point of view and from a moral, open-sourcey perspective, taking mobile devices into consideration is worth it. As Dave Shea pointed out, it’s super-safe to say that mobile design increases sales 2 times. In fact it is usually much more, closer to 3, 4 or even 5 times. But also – think of everyone, think of Africa, target new markets where there’s just no good desktop base, but a pretty good mobile infrastructure is emerging (Chris Heilmann).

    Yes, creating responsive layouts is hard, and that’s why new cool web standards are being developed to make our work easier and more fun. At WebVisions we could hear about them straight from the friendly hardcore-looking blokes that are making it happen.

    For example, check out flexbox. It’s only partially supported at the moment, still difficult to use and may hit performance, but man, it will rock when it’s sorted out! Imagine easily reordering and resizing elements with a few lines of CSS instead of resorting to time-consuming workarounds and dirty hacks.

    WebVisions Venue - CCCB

    We know that devices lie – that’s why we have to remember about the meta viewport tag. But like Bruce Lawson said, forcing the way of presentation in HTML is what Satan does, and that’s why there’s a proposal for a @viewport CSS rule.

    Soon we will be able to use much more precise media queries, such as @media (hover) for detecting if a device can hover, or @media (pointer: none/fine/coarse) that would tell you whether a device has a fine mouse pointer or rather limited pointing accuracy, as with touch screens.

    And how cool are viewport units? Very cool. And you can almost use them :)

    Just one more thing: FirefoxOS is out and if you haven’t heard about it yet, you should read why Chris Heilmann is so psyched about it. It’s awesome because there are no native apps – they are written in the same web technologies we use and love. This new system’s goal is to provide a high-quality experience while being accessible to everybody, with regard to both users and developers.

    Here are some beautiful, beautiful slides from the conference that are totally worth checking out.

    The crucial part of running a learning organization – and this is definitely one of the key principles we live by at Lunar Logic – is actively looking for new ideas and new sources of inspiration. Of course we don’t shy away from adopting proven techniques or improving our current practices, but to stay a cutting-edge company we need fresh ideas every now and then. This is the fuel of our organizational evolution, and as we know:

    It is not necessary to change. Survival is not mandatory.

    W.E. Deming

    That’s why we’ve sent our whole management team (OK, it’s only me and Paul) to Kanban Leadership Retreat. For those of you unfamiliar with the event it’s an unconference for leaders of Lean Kanban community that is held twice a year. You can find my impressions after the first edition here.

    The unconference format is tricky. I’ve seen many such events go terribly wrong, failing to attract enough people or deliver value for those who attend. Not this time. Kanban Leadership Retreat (or klrat) is a safe bet. One can be 100% sure that the right people will show up.

    And when I say the right people I mean that you can learn from every single attendee. Virtually everyone. People bring their own unique experiences to find patterns or coherence with the experiences of others. They share their genuine ideas to validate them or bring them to a new level. There’s no default assumption that someone is right, and there’s an agreement to disagree.

    If that weren’t enough, you’d meet people like Troy Magennis, who’d blow your mind in 15 minutes by showing you how your assumptions on e.g. estimation are totally wrong. And it’s not only a chance to sit in on Troy’s session. It’s a chance to spend hours with him discussing your own context and problems.

    You’d meet awesome fellows from TLC (probably the only company on planet Earth that could steal me from Lunar right now): Jabe Bloom and Simon Marcus. It’s a chance to exchange experiences with other people sharing your mindset who also work in the trenches and aren’t afraid of getting their hands dirty.

    You’d have a chance to bounce your ideas off people who are interested in the same subjects as you are. In my case this role was neatly played by Andy Carmichael (among others), as he shares my passion for Portfolio Kanban.

    Finally, you’d be able to challenge the very leaders of the community whenever your experience is not aligned with the concepts they share.

    After two days I’m coming home with a better understanding of how we should manage work in our context, a whole set of new ideas around Portfolio Kanban, a handful of experiments to try out (especially around estimates), and even more ideas that will influence the way we work. The short version is: my mind has been blown.

    This is why every senior manager who cares about evolving their organization should be there. If they have a chance, that is. Don’t forget that Kanban Leadership Retreat is limited to 50 people, which is one of the reasons for its high quality.

    I just don’t know whether I should be happy or sad seeing how few company leaders care to learn on that level. I mean, this is a clear competitive advantage we do have. We understand how the work gets done and what it takes to be effective and build trust with our clients. Interestingly enough it seems that such an approach is very rare in our industry.

    At the same time I feel for all the people stuck in the common ways of building software. Clients wasting their money on ineffective work, having limited visibility into what is happening in their projects and getting low-quality products. Vendors building the wrong stuff, being squeezed to work overtime as they can’t keep their promises, compromising the quality of their work and their own pride in workmanship. That’s just sad. Yet oh so common.

    It won’t change unless we go out of our boxes to see what is happening in management methods. It won’t change unless we move ourselves out of our comfort zones of “we are unique.” It won’t change unless we invest enough effort into learning new things.

    Kanban Leadership Retreat is the event where you can do all of that (and more). All the learning that I experience there helps to bring me as a leader and the company to another level.

    This is why I wouldn’t hesitate a second to sign up for next year’s retreat if only there was an option to do so. Neither should you if you have a chance to be there next year.

    Hello! Please, tell us how you feel.

    We do this in many ways: send Kudos, submit a topic during the weekly lean coffee, have a real coffee in our comfy kitchen or…

    Draw a smiley on a whiteboard.

    Happiness Chart

    Or beer. Or Pacman. It’s up to you.

    The Happiness Chart is a kind of universal mood indicator. We draw shapes on our whiteboard after each stand-up to tell others about our day. Green goes for “happy”, blue for “so-so” and red means “sad”.

    That’s it! And it works – a couple of :( in a row show that something’s wrong. Get up and do something about it.

    If paired with a project management tool (we use our own Kanbanery), you may evaluate which tasks your team drudged through and which made their day.

    The chart is 100% transparent, just hanging on the wall for everyone to see. Maybe a passer-by got a solution that the whole team was looking for last week? Or a cute cat picture to lighten the atmosphere?

    There’s another reason why we use happiness charts. According to some stereotypical belief, programmers aren’t very keen on talking about their emotions.

    Writing them down, compressing into three “states” (happy, so-so, sad) – that’s a different story.


    We’re agile here at Lunar Logic. We didn’t invent gunpowder – these days everybody is agile. And that’s a good thing. But not every company has got a portable boss, has it?

    Meet Paweł. A laptop desk, a bean bag and a recycled cardboard box – that’s all he needs to set up a flying office. He can usually be found in a few rooms in our office, shifting his spot a couple of times a day. Or a week. The pattern changes. Not only because of possible back pain – there are several reasons for him being in motion. Some of them have already changed our company.

    I’ve decided to present this style of floor-level management from the viewpoint of the crew of our hierarchically flat company. It’s worth noting that he’s also been the scrum master for some months now, and lately a project manager of one of our internal projects, so the working relations between him and various people at the company vary.

    Paweł Brodziński flying office

    Why am I writing this, and not Paweł? Firstly, because he already wrote about his flying desk. Secondly, he’s still the boss and the answers he’d receive could be a bit biased. His opinion is here, nonetheless.

    So – do you like having a portable boss?

    Mirek (marketing guy)

    I’m in two minds here. On the one hand, it’s grand when the boss’s your buddy sitting next to you, going together for your daily caffeine fix, cracking jokes and ridiculing one another. He doesn’t feel so detached as if he was confined in his office, behind a massive slab of a desk. More… human? I’d say.

    On the other hand, I can’t fully feel at ease when having him around. My harmless slacking over yet another TechCrunch article I must read makes me uneasy, for it doesn’t yield palpable results. Somehow I experience a state of a constant standup – it feels bad to say “I haven’t done anything today” even though I know I was busy.

    However, the more tangibly productive I am, the less I’m aware of the boss in the room and the more of a peer, who just decides about things.

    Hania (coder)

    Paweł is a scrum master in my project. I grew so accustomed to him being around that when he’s not present, I wonder why. His presence in our room makes him aware of problems on the spot, not only at the retrospective. It also levels the traces of hierarchy that exist in our company down to nil.

    Grzester (QA)

    I don’t have to run around to find the boss when he’s needed. He’s not tucked in a room, sheltered behind a desk and closed doors. There is no artificial barrier between us and him.

    There is no “boss” – just a member of the project, who also is the project manager. I think that the integration does the trick – a “traditional” chief with all the trappings of a big leader wouldn’t be able to mold with us so easily.

    Tomek (coder)

    I’ve finally witnessed what CEOs do. That role was overshadowed, however, by his work as the project manager. Paweł has deliberately become our peer, whether he liked it or not (and he rather liked it ;).

    Paweł (the culprit)

    I can grab my flying office in my hands and move it to the place where I’m needed or I feel like I can be helpful. I need just a bit of space in a corner or by the wall and done – a new office set up.

    Surprisingly, sitting in the corner and almost on the floor has a few unexpected advantages. First, you need very little physical space, which means you will fit into almost any room (unless it is already packed beyond any healthy limits). Second, this way you become almost invisible, which definitely helps if your goal is to understand how the team functions, and not just scratch the surface.

    Third, and arguably most importantly, you strip yourself of status symbols. Instead of a huge desk dubbed the airstrip by your colleagues, a leather armchair and a locker, you keep just the simplest set that does the job.

    All in all, you’re way more accessible and much less intimidating. Isn’t that something every single leader should strive for?

    It’s a big step

    I’ve been very cautious not to say that something “depends“. I believe that this word strips a thought of its power. Truly, being a portable boss takes guts. Be prepared to experience uneasiness and awkward looks and situations.

    Of course, it is easier for an accomplished extrovert to become a flying Dutchman in his office. It doesn’t mean, though, that, given a favorable company culture, a timid number cruncher shouldn’t change his working place every once in a while.

    Especially if they used to live in their office, behind a huge desk and closed doors.

    Stay tuned for How we Work pt. 2. You’ll see our happiness measurement system.

    Why is synchronization so important?

    More and more web apps have their mobile equivalents, so it is prudent to add offline functionality to mobile apps. Here’s where the real problem reveals itself – how to optimally synchronize data saved on the mobile device with changes in the web app?

    What if a mobile user saved some data offline which was being simultaneously edited online? Which version should be selected as the up-to-date one? How to implement solutions for such conflicts?

    Implementing the above-mentioned synchronization isn’t easy – there are a lot of edge cases, it requires a well-thought-out protocol and special metadata (e.g. a revision tree), and it has to be able to resolve conflicts. And there aren’t any shortcuts like “the simplest” synchronization type. The implementation of even the least complicated version requires an awful lot of work and causes many problems.

    We’ve tried to introduce such a “simplest” type in Kanbanery and it really got on our nerves. I started to wonder back then: “Isn’t there a better way to achieve this recently much-wanted functionality?”

    I really liked this quote I had found on GitHub:

    “Some mobile developers have waded into ad-hoc sync implementations and found themselves over their heads, with delayed or canceled products. It’s better to use a solution that already works.”

    It turned out that the “solution” mentioned was CouchDB and the person cited, Jens Alfke, is the co-author of Couchbase Lite, a library that implements CouchDB protocol on iOS.

    Why use Couchbase Lite?

    Mainly because it’s lightweight. It has a small code size, quick startup, low memory usage and good enough performance.

    It uses SQLite as the database engine instead of a real CouchDB embedded on the mobile device, because it’s more efficient. There used to be an implementation using CouchDB, but it was impossible to optimize, so the library was completely rewritten. However, since it uses the efficient and reliable REST-based protocol pioneered by Apache CouchDB, it is fully able to synchronize with a real CouchDB instance.

    This synchronization can be on-demand or continuous. Conflicts can be detected and resolved. Synchronization is handled using the replication feature of CouchDB.

    Conceptually, it’s very simple – just take everything that’s changed in database A and copy it over to database B. Replication has several properties – replications can be push or pull (depending on whether the source is remote or local), continuous (keeping the connection open and waiting for changes) or one-shot, persistent or non-persistent.

    Everything sounds great,

    but what if you don’t want to download all the users with all their data to your small mobile app (which is a pretty common situation)? Use filters! Just define a method which will decide which data you want to synchronize :]

    To get everything working on iOS side you need to have the Couchbase Lite framework.

    Next step – database setup:

    CBLDatabase *database = [[CBLManager sharedInstance] createDatabaseNamed: kDatabaseName error: &error];

    An example of a model definition:

    #import <Foundation/Foundation.h>
    #import "BaseModel.h"

    @interface Post : CBLModel

    @property (strong, nonatomic) NSString *title;
    @property (strong, nonatomic) NSString *body;
    @property (strong, nonatomic) NSString *user_id;

    @end

    #import "Post.h"

    @implementation Post

    @dynamic title, body, user_id;

    @end

    To retrieve user records by email, you have to define the view:

    [[[DataStore currentDatabase] viewNamed: UserByEmailView] setMapBlock: MAPBLOCK({
        NSString *type = doc[@"type"];
        id email = doc[@"email"];
        if ([type isEqualToString: @"User"] && email) emit(email, doc);
    }) reduceBlock: nil version: @"1.0"];

    And then you can make a query like that:

    CBLQuery *query = [[[DataStore currentDatabase] viewNamed: UserByEmailView] query];
    query.keys = @[anEmail];
    for (CBLQueryRow *row in query.rows) {
        NSLog(@"%@", row.value);
    }

    CBLQuery also has a property called rowsIfChanged, which returns new row values when they’ve changed, so you can register an observer on that property and watch for changes that way.

    But if you want to just display your data in some kind of UITableView, there is a handier solution – CBLUITableSource. It refreshes the data as soon as it changes – instant gratification!

    To use it you have to:

    • put one in the same xib as your table view,
    • set its tableView property to your table view (it’s an IBOutlet so you can just wire it up),
    • set its query property to a live query:

      CBLLiveQuery* query = [[[[DataStore currentDatabase] viewNamed: PostByTitleView] query] asLiveQuery];
      self.dataSource.query = query;

    • set its labelProperty to the text that you want to display, or use the CBLUITableDelegate protocol:

      - (void) couchTableSource: (CBLUITableSource*) source willUseCell: (UITableViewCell*) cell forRow: (CBLQueryRow*) row {
          NSDictionary* properties = row.value;
          cell.textLabel.text = properties[@"title"];
          cell.detailTextLabel.text = properties[@"body"];
          cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator;
      }

    Can I finally start replication?

    Yes, all you need to do to replicate is run the following code:

    NSArray* replications = [[DataStore currentDatabase] replicateWithURL: [NSURL URLWithString: kSyncURL]
                                                              exclusively: YES];
    self.pull = [replications objectAtIndex: 0];
    self.push = [replications objectAtIndex: 1];

    // you can set more replication properties here, e.g.:
    // self.pull.filter = @"Post/for_user";
    // self.pull.query_params = @{ @"email": @"ania@example.com" };
    // self.pull.continuous = YES;

    [self.pull start];
    [self.push start];

    Meantime in the Ruby world…

    There is the couchrest_model gem that you can use on the Ruby side to communicate with CouchDB.

    It is very straightforward to use. Here you have an example of a model definition:

    class Post < CouchRest::Model::Base
      property :title, String
      property :body, String

      belongs_to :user

      design do
        view :all
        view :by_user_id_and__id
      end
    end

    Because of the views defined in the design section, you can retrieve records like this:

    def load_user
      @user = User.get(params[:user_id])
    end

    def load_post
      @post = Post.by_user_id_and__id.key([params[:user_id], params[:id]]).first
    end

    Defining a filter that tells the protocol to synchronize only the documents which describe posts and belong to the requested user would look like this:

    design do
      filter :for_user, "function(doc, req) {
        if (doc._deleted) {
          return true;
        }

        if (doc.type && ((doc.type == 'Post' && req.query.user_id == doc.user_id)
            || (doc.type == 'User' && doc['_id'] == req.query.user_id))) {
          return true;
        }

        return false;
      }"
    end

    The only thing left to do is to send these definitions to the CouchDB instance. It should be done as early as possible, as the mobile application also needs them when connecting to the CouchDB instance.

    The best way to do that is to put the code below into config/initializers/couchdb.rb.

    Rails.application.eager_load!

    CouchRest::Model::Base.descendants.each do |model|
      if model.respond_to?(:design_doc)
        model.design_doc.sync
      end
    end

    The code iterates through your CouchRest models and pushes their design definitions to the CouchDB server.

    But what about conflicts?

    If a document is edited both offline and online – i.e. there is a conflict – it will have the _conflicts key.

    So on iOS you can create a view to retrieve conflicting documents:

    [[[DataStore currentDatabase] viewNamed: PostByConflicts] setMapBlock: MAPBLOCK({
        NSString *type = [doc objectForKey: @"type"];
        id conflicts = [doc objectForKey: @"_conflicts"];

        if ([type isEqualToString: @"Post"] && conflicts) {
            emit(conflicts, doc);
        }
    }) reduceBlock: nil version: @"1.0"];

    A document usually has one current revision, but when a conflict occurs, the protocol keeps all the conflicting revisions in order to give you a chance to resolve the conflict manually.

    Below is an Objective-C example of retrieving the properties of conflicting revisions:

    CBLQuery* query = [[[DataStore currentDatabase] viewNamed: PostByConflicts] query];

    for (CBLQueryRow *row in query.rows) {
        NSError *error = nil;

        for (CBLRevision *revision in [row.document getConflictingRevisions: &error]) {
            NSLog(@"%@", revision.properties);
        }
    }

    If you’re going to create some kind of UI displaying these conflicts and letting the user decide what should be kept, then you should not delete the revisions you still need, and you can optionally merge their contents into the one chosen by the user. However, if you don’t want to resolve conflicts manually, the protocol will preserve the illusion that there is no conflict by arbitrarily choosing one of the current revisions for you (the one with the lexicographically higher revision ID).

    Example projects

    You can find both the web application and the iPhone client example projects on my github account, demonstrating all the things I mentioned here.

    Conclusion

    CouchDB *just* works, and a bunch of people are working on making it more and more efficient. Don’t go mad implementing such a complicated thing from scratch. Use CouchDB.

    Do you prefer Rails to holidays? Is being cool in a cool room more appealing to you than sitting on the beach? Here’s a recipe for doing something awesome in the upcoming summer:

    – open your browser,

    – apply at our internship page,

    – wait anxiously for our swift reply!

    If you’re still wondering whether you should apply, here are some words from last year’s interns – Ania and Artur. You can meet them at our office, busy doing great design/front-end and RoR work.

    Ania

    Ania Migas Lunar Logic

    The internship was superb – I learned a lot more within 3 months here than within 3 years of studies. I had a chance to learn from the best – our designers really rock! – and use the most recent technologies. Every day I had an opportunity to get feedback on what I was doing and to hear useful tips. I was shocked when I received a Kudos after just a week of work, and saw my amendments in a commercial project after only 1.5 months!

    Artur

    Artur Trzop

    The summer internship @ LL made me realise how important testing software is. I’ve learned about lots of useful tools and technologies and became a decent Vim user :) The help and experience from older workmates is invaluable – pair programming and code reviews helped me learn faster and cooperate better. What is more, developing projects along the Scrum and Kanban methodology lines allowed me to learn them from the practical side.

    So, what are your plans for this summer?

    The most curious Rails developers invaded Krakow last week, so we led an invasion party of our own :)

    Words of Jul:

    Railsberry was my first conference for developers and it set the bar really, really high. It was incredibly well organized, the sun was shining, people were happy, speakers were amazing, there were swings and a DJ, yummy food, sunbeds and good parties. And gadgets, gotta love the gadgets ;)

    Here are 3 of the many thoughts that stuck with me after the event:

    Sometimes it’s better to use direct queries and take full advantage of the features of your database instead of always doing things the Rails way – from Agnieszka Figiel’s presentation about Słonik.

    Overdoing usability can be dangerous. When a login form tells the user that their password was incorrect, the bad guys know that the login exists and can brute-force it away. It’s obvious to me now, but it’s easy to miss when you’re designing a user friendly interface… or when you’re a forgetful and irritable user ;) – from Paolo Perego’s presentation on security.

    We associate creativity with being human (or maybe the other way around), but if we accept the fact that the processes involved are often unconscious, we might want to consider attributing creativity to machines. True, machines can only generate things from what we put into them, but isn’t this true about us as well? Especially if you’re an empiricist that isn’t afraid of determinism ;) So maybe being unpredictable, beautiful and cool is enough to call something creative – from Joseph Wilk’s talk about creative machines.

    As you can see, the talks were very diverse and I loved it. After all, we’re an agile, poly-skilled, curious bunch. I will definitely be at the next Railsberry.

    Fred George on stage #railsberry

    Hania & Ania L.’s report:

    A pink unicorn, lots of balloons and two interesting presentations were the highlights of the first day:

    “Experiment” – Chad Fowler presented very curious thoughts and an approach to life and coding :) He encouraged you to treat everything as an experiment, without the fear of failure. A big refactor or keeping fit is way easier with this in mind :)

    “Agile is the New Black” – a nice presentation about methodology by Fred George, though a bit too radical. Isn’t Agile meant to be resistant to changing circumstances?

    Besides that it was great to learn why you should use PostgreSQL and that it’s sometimes better to let the database do its work rather than limit yourself with ActiveRecord. Yes, we admit it – we had our share of fun gossiping with @agnessa480 :)

    Hania: I was especially enthralled by Joseph Wilk’s talk, especially because of the fact that I still think about connecting what I do with the world of music (Creative Machines).

    Joseph Wilk

    Day two: 

    Ania: Gregg Pollack gave a very motivating presentation. I had already heard about the e-learning sites he enumerated, yet it took this presentation to convince me to use them.

    All in all, the presentations were of high quality and the place abounded in attractions and ways to have fun. The conference itself was neatly organised and we think the Stara Zajezdnia (the venue) had an awesome feel; extra decorations, such as swings and deckchairs made us feel like being on holidays. The conference ended with an awesome flying drone show – the harmless drones, mind you :)

    #railsberry crowd getting ready for the start!

    Ania Migas says:

    Railsberry was like a kindergarten for programmers and all kind of people connected with IT – there were augmented reality workshops, swings, flying robots, muffins and all kinds of fun stuff you could imagine.

    I really liked the talk by Gregg Pollack – I learned a lot more about e-learning than I already knew. Joseph Wilk’s talk about Creative Machines just blew my mind – he presented something that could be called artificial thinking – the computers were creating their own melodies. Of course, as a part of a super-agile company I couldn’t miss the opportunity to check how agile we actually are during Fred George’s talk – we did pretty well! :)

    Artur’s thoughts:

    The first day of the conference went really well. The first talk, by Chad Fowler, was an experiment on its own, for it was created in tpp. The second talk was probably the best presentation that day – Fred George elaborated on the “Agile is the New Black” idea. What struck me the most is the notion that bug tracking systems are bad, because they make bug fixing take longer. It’s so easy to put something aside knowing that it’s saved somewhere. But is leaving bugs for later such a good decision?

    The second day met us with sun and another dose of experiments. Marcin Bunsch & Antek Piechnik riveted the audience’s attention with their “Shipping Post-PC” breakfast conversation. They staged an interesting show, with getbase.com used as an example of a model Post-PC app. The second presentation that captivated us was “Programming Flying Robots With JavaScript” by Felix Geisendoerfer. The flying robot received a storm of applause :)

    Lucek speaks!

    Boredom was prohibited; there was not a single overlong or wearisome presentation on the agenda. Fred George proved that Lunar Logic is the most agile of the agile companies and Agnieszka Figiel showed us magic tricks while searching for records in a PostgreSQL database. Katrina Owen and Paolo Perego put forth ways of maximising the benefits derived from application tests, and Gregg Pollack touched on the subject of the lately popular e-learning platforms (Vim Adventures FTW :D).

    All this and more left me eagerly waiting for the next edition. See you next year!

    We think there are too few female coders.  Well, actually, this topic is a bit hotter than that, yet still – correct me if I’m wrong, but males dominate in the coding world. That’s precisely the reason why we’ve decided to support Rails Girls Krakow 2013 :)

    For those who aren’t familiar with the formula, a quick recap: Rails Girls started in Finland in 2010 as quite a small RoR workshop for women, especially those with little or no knowledge of programming. It has grown since into a worldwide event, with thousands of Rails Girls involved.

    Rails Girls Kraków at work

    If you’re already thinking of it as a one-off event, with girls returning to their normal lives after a day full of programming, it’s not true – one of the participants of RG Kraków 2012 learnt to code and developed a successful vegetable fable app :)

    Rails Girls Krakow Friday Hug

     

    So, what happened on the 19th and 20th of April 2013? We sent our best Rails wizards to aid the R-Girls coach team: Hania, Ania & Ania and Jul. Firstly, it was the Friday installation party during which the coaches helped every lady in Rails distress. On Saturday the 20th, it was just hardcore coding, with sweat, tears and debugging. After that, the participants filled their programming Bentobox with all the necessities of the web, like AJAX, MongoDB, RoR (surprise, surprise) and so on… Needless to say, some of the Rails Girls fully understood what “buffer overflow” means on that day :)

    Soon, it was time for the lightning talks. However, just before that we’ve sweetened the workshops up a bit with Lunar Cake Pops, so Paweł could deliver his LT about leadership in Lunar Logic to a sugar-propelled audience; he also presented our custom-made summer internship page for Rails Girls only :) Mirek tucked in a few words about Hackerspace Kraków, followed by U2I’s presentation of the Unity engine and Base’s awesome depiction of a programmer’s life.

    Lunar Lollipop

    We’d like to thank our friends from WebMuses and Applicake for allowing us to help with this awesome event :)

    If you’ve just fallen in love with Ruby on Rails, don’t forget to sign up for our upcoming summertime internship joy with Rails!

     

    Photo courtesy of Wojciech Mardyła and Katarzyna Nogaj.

    7-strong! Tough! Ready for action! Lunar Crew is going to Railsberry.

    We couldn’t miss a Rails conference in our own, beautiful Krakow.
    And a conference for curious Rails devs? We were unable to resist!

    This masterpiece comes straight from our Applicake mates and we can’t wait to dive into the world of Rails, especially served in such a scientific manner :)

    Railsberry 2013 teaser

    What’s more? Check out the railsberry blog and  meet us in the cheerful, Ruby-coloured crowd! And expect a mission report after the conference :)

    What’s the easiest way to grab the attention of a Rubyist at a big, long conference?

    With cola.

    Well, not only with cola, to be honest. Firstly, it’s Kola, like in Fritz-Kola. Secondly, it’s a hand-made Rocket Edition, as one guy dubbed it. Thirdly… oh, well, just decipher the code below:

    Lunar Kola Base64

    Forgive me the eerie fingernail – it’s due to me getting high on glue. That tiny bit of Base64 is an idea by our scrum master and it leads to a website whose name might ring a bell with tabletop Warhammer players –

    http://codehulk.lunarlogicpolska.com/

    If you’re already bored, skip to the space coder section. If not – I need to confess that being a kola craftsman is utterly ridiculous.

    Here’s where the /b/ section starts. Fingernails are nothing compared to my flip-flops.

    Gruesomely Grisly Grind

    Lunar Kola preparations

    I purchased 96 bottles of Fritz-Kola and had a printing shop prepare diamond-shaped rockety stickers for the bottles’ necks, as well as – unfortunately non-adhesive – front labels. What else was left to do?

I set my shoulder to the wheel: I put the Kola in a bath full of ice-cold water overnight; ungluing the original labels was a thrilling experience!

    Then I tempered the glue with water, stuck the diamond embellishment to the bottle’s neck and began inhaling the glue… that is, carefully sticking the front labels to the bottles, wary to preserve the code underneath. I even checked the code on the first bottle by looking at the sun through the glass! Slowly, the Lunar Kola army ranks began to increase as my strength started to wither and the flip-flops were more flop than flip.

    Finally, dazzled and exhausted after more than four hours of gluing alone, the batch of hand-made Lunar Kola was ready to go and amaze the Ruby people of Wroc_love.rb 2013. I’ll touch on this matter later on, but before that I wanted to explain a bit more about the whole hidden challenge.

    Howdy, Space Coder!

    http://codehulk.lunarlogicpolska.com/

Code Hulk is basically a web app that checks your coding awesomeness. The texts & concept are mine, Mariusz did the design and Adam's code propels the whole app. It’s loosely based on the Space Hulk game, the Iron Sky film and some of our own internal exercises. It ruthlessly checks your programming skills.

In space. Versus space Nazis. We built it in around a week of non-full-time work and even got nice test coverage.

The app consists of five phases with an original, fast-paced storyline full of unexpected twists and without any cliffhangers. The audience was thrilled and begged for more.

    Erm… where were we? Let’s get back to the Ruby folk and their reaction to Code Hulk…

    Kola! Kola! Kola!

I must admit: I was anxiously anticipating the outcome. Would they notice the code? Wouldn’t they get frustrated going through the seemingly endless exercises? During the first day of the conference I deployed the Kola, opened up Google Analytics with its real-time monitor and waited. Like a flak operator at the radar, looking for that one blip.

    Blip!

Huzzah! Alright! Someone found the code! It wasn’t really an epidemic spread, but some folks took the bottles home and tried to crack the app there. During the second day we ran out of kola in less than a minute. Handing it out just after the dinner break was a tiny bit Machiavellian, I suppose. We laughed demonically during the break. All in all, it seems that for every bottle we had at least 2-3 visitors, without much aggressive marketing on our side (conference-induced traffic only).

All in all, I think the idea was fine, yet the execution lacked polish. Maybe the code should be exposed more? Fewer exercises? More bottles? Well, that calls for some conference testing. I’m sure of one thing – I really enjoyed doing it. Even while being half-submerged in ice-cold water or getting increasingly intoxicated by glue.

    Well, to be honest, we don’t know. But there’s an easy way to measure that – just go to the Krakow Ruby User Group’s meeting!

    Krakow Ruby User Group

The formula is simple – people meet, talk and discuss topics from the Ruby world. For most of the time it was only Kraków-based programmers, but just lately we had two visitors from DRUG (Lower Silesian Ruby User Group), including one of the organisers of Wroc_love.rb – Paweł Pacana with his Webmachine (Ruby) presentation.

    KRUG meeting on 8.12.2011

KRUG is quite an old bunch – they’ve been active since 2006, mined the rubies during RailsDay-like workshops, met in various venues and even visited some of Krakow’s universities. They’ve already talked about Backbone.js TDD with Jasmine, crawlable Ajax applications and automating boring tasks with Chef. And more, Ruby everywhere! We sometimes drop a beer or two at the meetings, but certainly don’t push our marketing tentacles inside. Kraków’s Ruby User Group has been independent and so it should remain.

    This Autumn we’re going to celebrate the 7th anniversary of KRUG and it’s more than certain that it calls for a serious Ruby fest.

The next KRUG meeting will take place on the 9th of April 2013 @ Google for Entrepreneurs Kraków (Rynek Główny 14). Ania Leśniak will speak about CouchDB synchronisation and Paweł Pierzchała will cover the Hollywood Principle. For more info, stay tuned to the KRUG Meetup!

I like to do code reviews. I can’t say that I always like the activity itself; that depends, of course, on who writes the code and on its quality. But I like to make sure that the code I’m going to work with later is reviewed. This might be my perfectionism or some obsession with control, but what’s important is that reviewed code will in general be better than unreviewed code, and no one will argue with that.

    Reviews let you catch obvious bugs before they even reach QA (“extra comma here, this will break IE”), or potential bugs that would only reappear 4 months later when no one remembers what this code does. They help you make sure that everyone follows similar practices and coding guidelines, and let you transfer knowledge easily (“you know, there’s a new method X in Rails 4 that you could use instead”).

    In smaller projects I usually use Gitifier, a git commit notifier for OSX that I wrote some time ago. It lets me review the work of others almost in real time, just minutes after it’s pushed to the repository.

But in my current project (~10 developers) this is physically impossible – new commits and branches are created so often that the constant distractions drive you mad pretty quickly. So I started using GitHub’s web interface instead, and I made a habit of reviewing the latest commits every day before I start coding. But the process of finding which commits exactly were new wasn’t perfect. I was never sure which commit was the last one I’d seen, and having multiple branches didn’t help either, since GitHub shows them all on one list, mixed together.

    At this point I decided to write a simple tool to automate this. The result is a Bash script which I called git-code-review.

    Here’s how it works: it creates a review subdirectory inside .git in your project’s working copy; in that directory it creates one file for each branch in the repository, and each file stores the hash of the last commit from that branch that you’ve reviewed. (If you know how git branches work in practice, you might notice this is exactly how git stores branch references in .git/refs – that’s true, except these change every time you do git fetch or git pull, and the review data only changes when you do a review.)
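To make the idea concrete, here’s a minimal sketch of the core loop – not the actual git-code-review script, just the gist of the mechanism, with illustrative paths and names:

# for each remote branch, compare the hash recorded at the last review
mkdir -p .git/review
for ref in $(git for-each-ref --format='%(refname:short)' refs/remotes/origin); do
  file=".git/review/${ref//\//_}"   # e.g. .git/review/origin_master
  old=$(cat "$file" 2>/dev/null)
  new=$(git rev-parse "$ref")
  if [ -n "$old" ] && [ "$old" != "$new" ]; then
    echo "Branch $ref updated: $old..$new"
  fi
  echo "$new" > "$file"             # remember the state for next time
done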

    The script is extremely simple to use: there are no options, you just need to run git code-review in your project directory. The first time you do it, it will just remember the current state of the branches. Then you can run it again every morning (or once a week, or whenever you want) – and it will tell you how the branches have changed since then. You don’t need to remember or track the commits anymore, it will do that for you. It will also show you GitHub compare links, so you can just click on them:

    Fetching latest updates...
    remote: Counting objects: 365, done.
    remote: Compressing objects: 100% (147/147), done.
    remote: Total 270 (delta 213), reused 176 (delta 119)
    Receiving objects: 100% (270/270), 92.89 KiB | 97 KiB/s, done.
    Resolving deltas: 100% (213/213), completed with 74 local objects.
    From github.com:jsuder/holepicker
    abc0825..0a711db master -> origin/master
    Branch origin/master updated: 313a63b..0a711db
    -> https://github.com/jsuder/holepicker/compare/313a63b...0a711db

    So far it’s been very helpful in my daily reviews, so I hope it will be of use to someone else too. As usual, feedback / pull requests / issue reports etc. are very welcome.

I’ve just come back from the AgileDevPractices conference that was held in Potsdam. For a first edition of an event, I must say it worked out well. Meeting people from different organizations and discussing different issues is always a learning opportunity.

It was also an opportunity to do a bit of marketing, making people aware of this awesome Ruby on Rails software shop in Krakow. In fact, given that people at AgileDevPractices understand what an agile approach to building software is, they should like working with us even more.

Anyway, the event started with a workshop day and I had a chance to run a full-day Kanban workshop for a small but awesome group of people. Instead of making it simply an introductory course to Kanban, I decided to focus on Kanban as a driver of continuous and sustainable improvements. It was a lot of fun and, based on the feedback I received, a decent occasion to learn, too. By the way, if you want to see a summary, here are Peter Saddington’s notes: part 1 and part 2.

There was quite a good mixture of topics among the conference talks, and one thing I specifically liked was a strong focus on testing and quality assurance. Interestingly enough, the talks I liked most were those that were neither about agile nor development nor practices. My personal highlights of the event were Hass Chapman’s session showing how strongly we are rooted in the hunter-gatherer history of homo sapiens, and Peter Saddington’s keynote on behavioral patterns and understanding what makes us tick.

I will be boring with this one, but the best part of any conference is always networking, and this time it wasn’t different. Hours of talking about different topics, trying to chew through new ideas and make them fit into the bigger picture of your current experience is like sharpening the saw. Everyone should do that.

    Then there is this awesome experience when you finally meet people you knew from Twitter or blogs for years, but this time they are in their real form, you know, humans, not avatars. The group of friends is bigger again. And I have a couple of ideas concerning who we should invite to the next year’s ACE! conference, too.

On top of that, I carved out some time to meet a couple of Lunar Logic alumni in Berlin: Olga and Marek. In fact, it was even better, as I flew to Berlin with Olga, so we had even more time to chat about the good old times at Lunar. It seems that if you worked at Lunar some time ago, you can expect me to stalk you some time in the future.

The last day of the conference started with my keynote on efficiency and busyness. It went very well, both in terms of the feedback I got and the number of questions, which is always a good indicator of whether people got involved. If you’d like to see the slides, here they are:

    All in all, the trip to Potsdam and Berlin was definitely worthwhile. I already have a couple of fresh ideas that I want to try out in Lunar. And I hope to see that crowd there next year.

    Wroclaw is famous for its tiny sculptures of dwarves scattered throughout the city. They’re featured in guides, included in tourist trips and widely loved. For the second year Wroclaw has been famous for the thing we all love – Ruby! :)

Wroc_love.rb started a year ago as a conference organised by DRUG (Lower Silesia Ruby User Group) and amazed everyone. Concise 30-minute talks, lightning talks, fishbowl discussions, three (!) official parties – everyone got what they wanted.

This year seemed even better. We went there six-strong: Adam, Rudy, Tomek, Phillip, Bartez and Mirek. Five programmers and a guy carrying crates of Lunar Kola. Speaking of which…

    With the Code Hulk challenge hidden under the kola labels :)

    Let the Ruby miners speak!

    Bartez

I’m happy that I had the opportunity to be at Wroc_love.rb. It showed me what the currently hot & trendy topics in the Ruby world are.

The four main themes raised at the conference were:

Security problems: A few talks and a discussion panel were devoted to this aspect. But definitely the best related talk was given by Richard Schneeman from Heroku, who in a very pleasant way described the most common Rails vulnerabilities, e.g. the YAML issue or SQL injection.

Concurrency: Referred to in talks by e.g. David Dahl – he explained how he dealt with that problem with the help of JRuby. I also noticed the “celluloid” library mentioned a few times, which brings an actor-based concurrency model to Ruby.

Techniques: Lots of talks were about development techniques. There were talks about DCI by Rune Funch Søltoft and DDD by Sławomir Sobótka, plus a lightning talk about Dependency Inversion by Paweł Pierzchała. There were also talks by Brian Morton from Yammer and Bryan Helmkamp from Code Climate encouraging the use of many thin models – or, even better, services – which I hope will become a standard.

From Ruby to other languages: The most interesting talk was given by Jan Stepien, who showed how languages inspired one another’s authors, e.g. how Lisp inspired Matz, or how Ruby inspired Clojure or Haskell. He concluded: “try to use other languages, at least one”. In that area I also have to mention the JavaScript frameworks fight, which was definitely won by Adam Pohorecki. There were no arguments that could beat angular.js, which he represented.

Summarising: “Try to use different techniques and languages, and watch out for security.”

    Adam

    The first day of wroc_love.rb’s schedule was dominated by hackathon/open spaces, with one presentation and two discussions to follow. The highlight of the day was an impromptu introduction to Haskell by Jan Stępień, who gave us a whirlwind tour of the language features.

During the second day we had a short fight of the JS MV* frameworks, where I defended AngularJS against Ember.js, Hexagonal.js and Backbone.js. Unfortunately, because of the time constraints we didn’t even get to discuss some important topics like testing, but I feel that AngularJS blew the other frameworks out of the water anyway ;)

    The last presentation of the conference was also the best one. Bryan Helmkamp talked about patterns of structuring Rails apps by extracting behavior into different types of objects – value objects, view objects, policies and others. I think every Rails programmer would benefit from watching this talk.

I think that wroc_love.rb was one of the better conferences I’ve been to in the last couple of years. I look forward to attending it next year :)

    Phillip

    Friday, March 1st, 2013  – Day 1

At first, the location we were headed to seemed surreal. A set of abandoned-looking factory buildings, maybe warehouses. It seemed like we were in the wrong place. But when we arrived, there was a lot of bustle, people sitting around in the signature red wroc_love t-shirts. The Friday venue was a very pleasant, albeit cold, surprise.

The beginning of the day, up until about 6 o’clock, was a very open format. Nothing was set in stone, so mini hackathons and improv lectures sprung up. Later in the day, we had our first taste of the scheduled speakers, starting with a discussion about functional vs object oriented programming. After the discussion, Stephan Wintermeyer spoke about using a Raspberry Pi to test website response time. He used the Pi as a simple baseline and showed a bunch of methods to decrease page loading times. He had a test suite that clicked through a web page, and used HTTP caching, page caching, and cache preheating to cut page load times by a factor of six or more. One of the major points he made was to think about caching when you start a project, not when speed becomes an issue.

Next up was a fishbowl format discussion – the overarching topic was productivity, with the discussion ranging over measuring productivity, choosing the right metrics, and opinions about what matters most for productivity. In terms of measuring, a few interesting tools were mentioned, including Code Climate and Pivotal Tracker. As for the most important thing for productivity, opinions included happiness, health, and team feedback.

After all these discussions it ended up being 8 o’clock, and thus time for a party, which is also an important part of any conference, as that is the time to sit down and talk to fellow developers – to learn their likes, dislikes, and other opinions about all sorts of software topics.

    Saturday March 2nd, 2013 – Day 2

Saturday started off with a few good talks in the morning, with topics ranging from how to structure and design software to speeding up page loads. There was a demo regarding client- vs server-side work, showing that when doing big IO things like uploading photos, you can display the file instantly on the client side, as if it were already uploaded, and then upload it in the background for a much snappier UI.

During this day there was a lot of talk about concurrency and the different Ruby implementations out there, which include MRI, JRuby and Rubinius. This was a big topic because MRI cannot run threads in parallel, so the other Ruby implementations are the way to go if you are looking to use all the cores on your machine in your Ruby projects. The Celluloid gem was mentioned every time someone spoke about concurrency, and it sounds like it is very useful in that realm.

Also demoed was Topaz, a new implementation that is in the works. It is a Ruby implementation written in RPython (the toolchain behind PyPy) and looked pretty promising in terms of speed. There was a very interesting four-person discussion about various JavaScript frameworks, including Ember.js, Hexagonal.js, Backbone.js, and Angular.js. Our very own Adam Pohorecki defended the merits of Angular.js and definitely made it obvious that it is very useful and intuitive.

After all the talks came the lightning talks – the most interesting ones covering Code Climate, a service that statically analyzes your Ruby code and grades it based on a bunch of metrics; Chef, a tool that lets you write scripts for quick, automated server setups; and Bitcoin, an interesting peer-to-peer currency.

Sunday March 3rd, 2013 – Day 3

Sunday started off with a fantastic talk by Jan Stępień showing how other programming languages influenced Ruby, and then in turn, how Ruby influenced other programming languages. The overarching point of his talk was that the language we use influences how we think, and thus, knowing a lot of programming languages gives us all new perspectives on various problems.

Before lunch there was another awesome talk, about security and secrets, by Richard Schneeman. He went through a lot of common security issues including DDoS attacks, memory exploits, parser exploits, and the recently discovered YAML issues. He very clearly explained how each of these works and what to do to avoid having your Ruby app threatened by these types of attacks. He also talked about where to keep the private information needed for your app (database passwords, third-party service authentication info, etc.). One of the best ways to do it is using environment variables, so that these secrets are never in your repo.

The second-to-last talk of the conference was very interesting in that it was a very philosophical one. It compared the ideas in philosophy with the ideas in programming and showed how similar they are. Steve Klabnik explained, with his professor’s jacket on, that the ideas of philosophy have been passed on for thousands of years, and even though they are rooted in natural language, the principles can be used in the computer science world.

The last talk of the day was by Bryan Helmkamp, creator of Code Climate. He went through seven very useful refactoring patterns for taming all those fat models every project has floating around. This talk went into detail about each pattern: when to use it, how to use it, and why it helps. I could see all seven of these patterns being used in every project. This was definitely one of the best talks given, and it is definitely going to be very useful in the future.

After the talks were finished, there was another round of lightning talks, where LL’s Paweł Pierzchała gave a talk on Dependor, a gem that he and Adam created to facilitate dependency injection in Ruby.

    Final Thoughts

    All in all this was a very awesome conference. Great people, lots of knowledge, a beautiful city. What more could you want? I am already excited for next year!

    Rudy

I really enjoyed wroc_love.rb – it was one of the best conferences I’ve attended. The talks were more concerned with practices than frameworks, and that was a good thing!

The first day started with a promising FP vs OOP discussion, but in my opinion it drifted away a few times, for example when FP was blamed for being hard to use. I would rather say that all new languages, especially ones with a different paradigm, are hard in the beginning.

However, the second day’s discussion, Angular vs Hexagonal vs Backbone vs Ember, was great: pros and cons of the technologies were covered and the arguments concentrated on the technology itself. Our very own Adam Pohorecki did a great job fighting for Angular. The conference ended with an amazing design talk – Refactoring fat models with patterns – delivered by Bryan Helmkamp. I couldn’t agree more; I have Code Climate badges in all the open source projects I work on. :)

    Lastly, I gave a lightning talk about dependency inversion – http://wrozka.github.com/dependor-wroclove/di.html.

    It seems that the Lunar crew will frequent wroc_love.rb 2014 :) See you then!

The idea of this article is to show you new possibilities for creating web components for your own pages. It’s not always necessary to add a bunch of JavaScript code and third-party libraries to your project just to create a new component. Depending on the component, you can find new ways to implement it. After reading this article, I hope you will see a new world of possibilities right in front of you.

Most of the components we see around the web require some kind of user interaction. Take as an example a very common type of component found on many websites: a carousel. The basic functionality of a carousel is to show one content block at a time while allowing continuous interaction through all the content blocks. Below is an example of a carousel using YUI3.


    Image slider example

To use this widget on your website you need to add the YUI3 library (21.71 kB), a bit of CSS for the look-and-feel (3.12 kB) and then fire up the framework to build the widget using your HTML structure. If you are a performance addict, you would go crazy over the amount of code your page needs to load. But what if I told you that you can build the same component without any JavaScript code? It’s perfectly possible if you use a little imagination and the tools you already have.

    Shut up and show me the code

To start, I will introduce the basic concept which we will use to develop our sample code, and which you can use in the future to develop your own widgets.

Thinking about carousels, we know that the user needs to click some kind of button, which fires an event that makes the carousel show the next content block. So we have states that change in response to the user’s interaction. We need something to handle the user’s click, store the current state, and then change the interface to represent the new state. Below there’s an example of everything we need in a carousel: states that change in response to the user’s interaction and update the component on the page.

    The most important part of this example is how it changes when we activate the different radio buttons. Below is the HTML structure of the example.

<div class="carousel">
<input id="first-page" checked="checked" name="controls" type="radio" />
    <input id="second-page" name="controls" type="radio" />
    <input id="third-page" name="controls" type="radio" />
    <div class="move-container">
    <div class="page first">A</div>
    <div class="page second">B</div>
    <div class="page third">C</div>
    </div>
    </div>

Each rectangle is called a page element and is 100px wide. In the code fragment below it’s clear that we use only CSS selectors to define the different states and what visually changes when each radio button is activated. I’m omitting the CSS code needed only for styling the elements, but you can see the complete example later.

    .carousel {
    display: inline-block;
    position: relative;
    width: 100px; /* the same width of the page */
    }

    .page {
    float: left;
    }

    .move-container {
    /* needs to fit all the pages inside */
    width: 300px; /* = page.width * 3 */
    }

    #second-page:checked ~ .move-container {
    /* moves one page to the right */
    margin-left: -100px; /* = page.width * -1 */
    }

    #third-page:checked ~ .move-container {
    /* moves two pages to the right */
    margin-left: -200px; /* = page.width * -2 */
    }

With this we can evolve the example into a more elaborate version, hiding the radio buttons and allowing the user only to click to move to the next page of the carousel. For that, we can style content inside a label element that targets each radio button in our controls, as you can see in the example below.

For that example we’ve created simple elements to represent our buttons. When clicked, they check the radio button and switch to the next state.

<div class="carousel">
<input id="first-page" checked="checked" name="controls" type="radio" />
<input id="second-page" name="controls" type="radio" />
<input id="third-page" name="controls" type="radio" />
<label class="for-second-page" for="second-page">
    <span class="button next">next</span>
    </label>
    <label class="for-third-page" for="third-page">
    <span class="button next">next</span>
    </label>
    <label class="for-first-page" for="first-page">
    <span class="button next">next</span>
    </label>
    <div class="move-container">
    <div class="page first">A</div>
    <div class="page second">B</div>
    <div class="page third">C</div>
    </div>
    </div>

Then we hide all the radio buttons and define which next button should appear in each possible state.

    .carousel input {
    display: none;
    }

    #first-page:checked ~ .for-first-page,
    #first-page:checked ~ .for-third-page,

    #second-page:checked ~ .for-first-page,
    #second-page:checked ~ .for-second-page,

    #third-page:checked ~ .for-third-page,
    #third-page:checked ~ .for-second-page {
    display: none;
    }

    The complete example 2 is also available to you.

Now we can add the possibility to move left and wrap around. For that we just need to duplicate the control elements, so that one set of controls is dedicated to moving left. We also need two containers for our pages, one which moves left and another which moves right; each button moves the respective container.

<div class="carousel">
<input id="first-page-left" checked="checked" name="controls-left" type="radio" />
<input id="second-page-left" name="controls-left" type="radio" />
<input id="third-page-left" name="controls-left" type="radio" />
<label class="for-second-page-left" for="second-page-left">
<span class="button next">prev</span>
</label>
<label class="for-third-page-left" for="third-page-left">
<span class="button next">prev</span>
</label>
<label class="for-first-page-left" for="first-page-left">
<span class="button next">prev</span>
</label>
<input id="first-page-right" checked="checked" name="controls-right" type="radio" />
<input id="second-page-right" name="controls-right" type="radio" />
<input id="third-page-right" name="controls-right" type="radio" />
<label class="for-second-page-right" for="second-page-right">
<span class="button next">next</span>
</label>
<label class="for-third-page-right" for="third-page-right">
<span class="button next">next</span>
</label>
<label class="for-first-page-right" for="first-page-right">
<span class="button next">next</span>
</label>
    <div class="move-left-container">
    <div class="move-right-container">
    <div class="page second">B</div>
    <div class="page third">C</div>
    <div class="page first">A</div>
    <div class="page second">B</div>
    <div class="page third">C</div>
    </div>
    </div>
    </div>

    Then update the CSS selectors to handle the new states.

    .move-right-container, .move-left-container {
    /* needs to fit all the pages inside */
    width: 600px; /* = page.width * 6 */
    }

    .move-left-container {
    margin-left: -200px; /* initial state */
    }

    #second-page-left:checked ~ .move-left-container {
    /* moves one page to the right */
    margin-left: -100px; /* = page.width * -1 */
    }

    #third-page-left:checked ~ .move-left-container {
    /* moves two pages to the right */
margin-left: 0; /* back to the normal position */
    }

    #second-page-right:checked ~ .move-left-container .move-right-container {
    /* moves one page to the right */
    margin-left: -100px; /* = page.width * -1 */
    }

    #third-page-right:checked ~ .move-left-container .move-right-container {
    /* moves two pages to the right */
    margin-left: -200px; /* = page.width * -2 */
    }

    #first-page-left:checked ~ .for-first-page-left,
    #first-page-left:checked ~ .for-third-page-left,

    #second-page-left:checked ~ .for-first-page-left,
    #second-page-left:checked ~ .for-second-page-left,

    #third-page-left:checked ~ .for-third-page-left,
    #third-page-left:checked ~ .for-second-page-left,

    #first-page-right:checked ~ .for-first-page-right,
    #first-page-right:checked ~ .for-third-page-right,

    #second-page-right:checked ~ .for-first-page-right,
    #second-page-right:checked ~ .for-second-page-right,

    #third-page-right:checked ~ .for-third-page-right,
    #third-page-right:checked ~ .for-second-page-right {
    display: none;
    }

After that, we can see our example working as shown below.

Finally, after adding styling and some little effects, we have our amazing carousel without a single line of JavaScript.

Creating widgets using only CSS can considerably reduce the size and loading time of your pages, and it’s a task full of creativity. Others are creating interesting things using similar techniques. Create your own widgets to amaze the world.

    Let’s say you have a CSS transition that takes an element and scales it two times on mouseover. Your code probably looks like this:

    #scaled-thing {
    -webkit-transition:all 1s linear;
    -moz-transition:all 1s linear;
    -ms-transition:all 1s linear;
    -o-transition:all 1s linear;
    transition:all 1s linear;
    }

    #scaled-thing:hover {
    -webkit-transform:scale(2.0);
    -moz-transform:scale(2.0);
    -ms-transform:scale(2.0);
    -o-transform:scale(2.0);
    transform:scale(2.0);
    }

It’s all good in the hood until you come across a really complex DOM with a lot of things going on – then things usually get choppy and slow in older WebKits. That’s because the browser takes your 2D transition and forces it through the CPU. What can you do to make it smooth?

    Fortunately, there’s an extremely simple hack that allows you to force the browser to enable GPU rendering for CSS transitions – just take your code and tell the browser it’s actually a 3D transition even though it isn’t. The code after the change would look something like this:

    #scaled-thing {
-webkit-transform:translateZ(0);

    -webkit-transition:all 1s linear;
    -moz-transition:all 1s linear;
    -ms-transition:all 1s linear;
    -o-transition:all 1s linear;
    transition:all 1s linear;
    }

    #scaled-thing:hover {
    -webkit-transform:scale(2.0);
    -moz-transform:scale(2.0);
    -ms-transform:scale(2.0);
    -o-transform:scale(2.0);
    transform:scale(2.0);
    }

    (See this example on JSfiddle)

It has its drawbacks, of course – the screen might blink or lose its colour profile when starting a transition, but hey – hacks aren’t called hacks without a reason :)

Kudos for Kuba :)

“Kudos” US [ˈkjuːdɒs], UK [ˈkuːdɒs] – 1. Fame and renown resulting from an act or achievement. 2. Praise given for achievement.

Here at Lunar Logic, we love to give people random geeky gadgets or long-lasting cinema vouchers. Immediately after that, I take a photo of the gift recipient and post it to our Facebook, saying nice things about that person. Sometimes they are so courteous that I’ve got to be deliberately nasty later on. Just to keep the balance, you know.

Seems wacky, doesn’t it? Yet it’s our way of expressing gratitude and respect. We’ve even got a strange, outlandish word for this activity: Kudos.

    Kudos may take very different forms. Some people get straight to the point, while others laud the overall performance of the person they value. The goal is simple: to tell a person that they’re great. And not (only) by the boss – a Kudos expresses a real high five by a peer!

All Kudos are anonymous, and thus far we haven’t experienced any mutual admiration societies. However, people sometimes grant Kudos to themselves. No one sees you fixing that cranky server after hours or extinguishing a fire in a project during Christmas :)

Ok, some of the Kudos are purely out of kindness, but… Lunar Logic is not only about coding. We love the fact that instead of a coffee break we might as well have a song break, play foosball or meet for board gaming after work :)

    Paweł, Lunar Logic’s leader, loves to give out Kudos:

It’s fun to get Kudos. After all, who wouldn’t love to hear supportive feedback on their work and then get an awesome gadget chosen from a wide range of utterly useless but irresistibly funny stuff? On top of that, one can play a celebrity with all those photos, publicity and what have you.

Receiving Kudos isn’t the best part of the story, though. The best part is the warm fuzzy feeling you have when you give someone Kudos. I mean, they’ll never know it was you who started it. It just feels great to exercise your power of doing something nice for someone who deserves it.

    Oh, and by the way I love my job as a messenger. Thanks to this, I just know I’m in a business of making people happy. A dream job if you ask me.

Paul, LL’s owner, wrote this beautiful post about the spy network in Lunar that’ll also give you some insight into our eerie custom.


Let’s say you want to modify a file in your repository locally and not commit it. For example, you might want to use a different .rvmrc than the rest of the team, or you might want to disable a non-essential gem or a part of the system that has some problems on your OS.

    Of course an ideal way would be to fix it in the repo so that it works for everyone, but sometimes that’s not possible. You could make the change and just remember not to commit it, but you can be sure you’ll forget sooner or later.

    In this case, the solution is to mark the file as ignored in your local repository. There is something like a “local gitignore”, which is located at .git/info/exclude, but that won’t work if the file is already in the repository. You need to use the update-index command instead:

    git update-index --assume-unchanged .rvmrc

    The file will just disappear from git status and will behave as if you didn’t modify it at all. If you change your mind, this is how you remove that “unchanged” flag:

    git update-index --no-assume-unchanged .rvmrc

    If you forget which files you’ve marked as unchanged and you want to see a list, it seems the only way to do that is by using this command:

git ls-files -v | grep '^[a-z]'

This works because files marked as unchanged show up in ls-files with a lowercase status letter. (If you know a less hacky way to print that list, let me know…).
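If you need that list often, one idea is to wrap the incantation in a git alias – the alias name here is made up, pick whatever you like:

$ git config --global alias.ignored '!git ls-files -v | grep "^[a-z]"'
$ git ignored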

    P.S. Don’t forget about the Krakow Ruby User Group meeting tomorrow!

    Edited on July 4, 2013

    Once in a while, as a Ruby developer, you are faced with the situation when a product owner says, “Alright, now it’s time to make it live”. And then you probably think “I’ll be fighting with these stubborn servers for the next few days…”. If you have a very simple app or one at the early stages of its lifetime you can use one of the “no hassle deployment” platforms such as Heroku or OpenShift. But chances are you will need some custom stuff which is difficult to achieve on these kinds of platforms or you just feel better with “root” access.

You have many options for setting up Linux servers. Amongst the most popular ones are Chef and Puppet. Various hosting providers also add their own solutions for provisioning boxes (such as StackScripts on Linode). Or you can do it “the old-school way”, manually. If you don’t need multiple machines and/or you have just a simple Rails site, then provisioning tools might be overkill. Also, I believe any Ruby developer should configure a production server from scratch at least once, to get familiar with this stuff and to learn where to look when troubleshooting server-side problems.

    Recently, I led a workshop about these things here at LLP and we decided to compile this knowledge into a blog post to share it with other Ruby developers and to have a known reference point in the future. So here it goes.

    Note: the following steps were tested on Ubuntu 12.04 and 12.10. They don’t include any version-specific commands so they should also work without a problem on newer Ubuntu versions when they get released.

    Preparations

Let’s assume you just created a VPS box and got an email with root access. Now log in to the server. If you got access to a non-root user with sudo rights, switch to root with:

    $ sudo -i

    Set preferred editor

    You’ll be configuring the machine by editing several config files. Make sure you have your preferred editor set:

    $ export EDITOR=vim

Let’s also make it the default editor for future sessions:

    $ echo "export EDITOR=vim" > /etc/profile.d/editor.sh

    Update apt sources and upgrade base packages

    You’ll be installing packages from Ubuntu repositories. Make sure apt sources are up to date:

    $ apt-get update

Now run the following to install the Vim editor (skip it if you prefer nano or another editor):

    $ apt-get install vim

    Set server timezone and time

    To save yourself (and your app) some trouble set server’s timezone to UTC:

    $ echo "Etc/UTC" > /etc/timezone
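Depending on the Ubuntu release, the new value may only be picked up after the tzdata package is reconfigured, so it’s worth running this right after (a small extra step):

$ dpkg-reconfigure --frontend noninteractive tzdata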

Let’s also install the ntp daemon, which will keep the server time up to date, all the time:

    $ apt-get install ntp

Add a user for your app

    You don’t want your app to run as root. Let’s assume your app is named “luna” so let’s add “luna” as the user:

    $ useradd -G sudo -m -s /bin/bash luna

    Allow sudo

    You’ll be logging onto the server as the user “luna” from time to time to make some tweaks. Grant the user sudo access:

    $ echo "luna ALL=NOPASSWD:ALL" > /etc/sudoers.d/luna
    $ chmod 0440 /etc/sudoers.d/luna

    Copy SSH key

    To avoid entering a password (for many reasons) when logging in as “luna” copy your public SSH key to server user’s ~/.ssh/authorized_keys file with the following command:

    $ ssh-copy-id luna@luna.com

    Try ssh’ing now:

    $ ssh luna@luna.com

    You shouldn’t be asked for a password anymore.

    Useful stuff

    Switch to the user “luna”:

    $ su - luna

    Disable the installation of rdoc and ri docs for installed gems to save yourself some time:

    $ echo "gem: --no-rdoc --no-ri" > ~/.gemrc

    Set RAILS_ENV to production so you don’t have to type it when invoking rake:

    $ echo "export RAILS_ENV=production" >> ~/.bashrc

    Ruby

Now, for Ruby, we’ll install RVM and use it to install Ruby 2.0.0.

    Switch back to root and follow the next steps.

    Install RVM

Here we’ll install RVM globally (a so-called “system install”, as opposed to a “user install”). This is handy if you want to have several apps or users on the server.

    Make sure you have the curl command installed:

    $ apt-get install curl

    Install a stable RVM version by piping the installation script to bash:

    $ curl -L get.rvm.io | bash -s stable

Source the rvm script so we don’t need to re-login:

    $ source /etc/profile.d/rvm.sh

Let’s ignore RVM prompts about trusting .rvmrc files (we’ll use the default gemset for Passenger anyway):

    $ echo "export rvm_trust_rvmrcs_flag=0" >> /etc/rvmrc

    RVM access for the user “luna”

    Add the user luna to the rvm group:

    $ usermod -a -G rvm luna

    Install requirements

See what the requirements for compiling MRI are:

    $ rvm requirements

Most likely they are all in the following list of packages:

    $ apt-get install build-essential openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison subversion

    Install Ruby

    Now, install ruby via RVM:

    $ rvm install 2.0.0

Make the installed Ruby the default

Make it the default for all new shells:

    $ rvm --default use 2.0.0

    Nginx + Passenger

    As far as a webserver is concerned, the combo of Nginx + Passenger works well in most cases.

    Install the passenger gem

    $ gem install passenger

    Install Nginx via the passenger gem

    First install dependencies for Nginx/Passenger:

    $ apt-get install libcurl4-openssl-dev

    Now compile it:

    $ passenger-install-nginx-module

    Just follow the instructions to compile and install nginx.

    Create boot service (upstart)

    The upstart script for Nginx will be used for starting/stopping nginx via the command line and will make sure nginx starts on system boot.

Download the script into Upstart’s config directory:

$ curl -L https://gist.github.com/sickill/2492523/raw/d1ecb87e9eba9e59ddd44d3c3aaf6c3c52b16374/nginx.conf -o /etc/init/nginx.conf

    Start nginx:

    $ start nginx

    And check if it works by looking at the response:

    $ curl localhost

    “Welcome to Nginx” means that everything is fine.

    VHost

Now we need to create a virtual host config for the luna app. Replace the default server block with the following:

# /opt/nginx/conf/nginx.conf

server {
listen 80;
server_name www.luna.com luna.com;
root /home/luna/current/public;
passenger_enabled on;
}

    Restart Nginx:

    $ restart nginx

    And confirm that it restarted properly:

    $ curl localhost

You should get a 404 page, because our app is not running yet.

    MySQL

    Install the MySQL server via apt:

    $ apt-get install mysql-server libmysqlclient-dev

    Create a project database (you will be asked for the mysql root password you set when running the previous installation command):

    $ echo "create database luna_production" | mysql -u root -p

    And grant access to the user luna:

    $ echo "grant all on luna_production.* to luna@localhost identified by 'luna123'" | mysql -u root -p

    Capistrano

    Let’s use Capistrano for deploying new releases of the “luna” app.

    Note: All of the commands in this section are meant to be run on your local machine inside the Rails project directory (unless otherwise stated).

    Add capistrano to the bundle

    First add the following to your app’s Gemfile:

    group :development do
    ...
    gem 'capistrano'
    gem 'rvm-capistrano'
    ...
    end

The latter nicely integrates Capistrano with RVM.

    Install new gems:

    $ bundle

    Generate skeleton capistrano config files

    $ bundle exec capify .

    You should have Capfile and config/deploy.rb files now.

    Edit Capfile

    Make the file contents look like this:

    load 'deploy'
    load 'deploy/assets'
    load 'config/deploy'

load ‘deploy/assets’ handles asset compilation in Rails 3. If you’re deploying a Rails 2 application, just remove this line.

    Edit config/deploy.rb

First, fill in the variables with your application name, repository and web server name. Then find the commented-out block of code related to Passenger and uncomment it.

    Then make sure you have following lines in the file:

    require 'rvm/capistrano'
    require 'bundler/capistrano'

    ssh_options[:forward_agent] = true
    set :deploy_via, :remote_cache
    set :use_sudo, false
    set :user, "luna"
    set :deploy_to, "/home/luna"
    set :rails_env, "production"
    set :rvm_type, :system

    set :keep_releases, 3
    after "deploy:restart", "deploy:cleanup"

    namespace :deploy do
    desc "Symlink shared/* files"
task :symlink_shared, :roles => :app do
    run "ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml"
    end
    end

    after "deploy:update_code", "deploy:symlink_shared"

Let Capistrano prepare the directory structure on the server:

    $ bundle exec cap deploy:setup

    Copy the example database config file to the server:

    First create a config directory inside the shared directory:

    $ ssh luna@luna.com mkdir -p ~/shared/config

    Copy the file:

    $ scp config/database.yml.example luna@luna.com:~/shared/config/database.yml

    Now set proper values in database.yml on the server:

    $ ssh luna@luna.com vim shared/config/database.yml

    And deploy for the first time:

    $ bundle exec cap deploy

Once you have the application code on the server, log in there and prepare the db structure:

    $ ssh luna@luna.com

    # The following happens in a remote shell

    $ cd current
    $ bundle exec rake db:setup

    Finally, deploy just to make sure everything works:

    $ bundle exec cap deploy

    Logrotate

    Create the /etc/logrotate.d/luna file with following content:

/home/luna/shared/log/*.log {
    daily
    missingok
    rotate 30
    compress
    delaycompress
    copytruncate
    }

That will tell logrotate to rotate the log files daily, compress them, keep them for 30 days and not choke when a file is missing. copytruncate is important here, as it makes sure the log file currently used by the Rails app is not moved but truncated. That way the app can just keep on logging without reopening the log file.
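Before relying on it, you can ask logrotate for a debug run, which prints what it would do without touching any files:

$ logrotate -d /etc/logrotate.d/luna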

Don’t forget about this if you manage the production box yourself. And do it when you initially set up the box, not “later”. “Later” often means “when the app is down due to insufficient disk space”. Srsly.

    Firewall

    Ubuntu comes with a decent firewall management tool called ufw. Install it:

    $ apt-get install ufw

    Now set the default firewall policy to “deny”:

    $ ufw default deny

    And allow connections to the services we want to expose to the world:

    $ ufw allow ssh/tcp
    $ ufw allow 80/tcp
    $ ufw allow 443/tcp

    Finally, enable firewall:

    $ ufw enable
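You can verify the resulting rules at any time with:

$ ufw status verbose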

    Your production environment is safer now.

    Mail server (MTA)

There are many offerings for an SMTP service that also bring in additional features like email open tracking, link click tracking and whatnot. If you just need plain “send message and forget” functionality, you may use the Postfix MTA.

Install it with:

    $ apt-get install postfix heirloom-mailx

    Thanks to the firewall rules from the previous section you don’t need to worry about spammers using your server for sending their spam. They won’t be able to connect to your Postfix daemon from outside the machine.
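For a quick smoke test, you can send yourself a message – the mailx command comes from the heirloom-mailx package installed above, and the address below is, of course, a placeholder:

$ echo "Hello from luna" | mailx -s "Postfix test" you@example.com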

    Monitoring

    For basic system monitoring the easiest thing you can do is to install monit:

    $ apt-get install monit

    Open /etc/monit/monitrc in an editor and adjust the default config to suit your needs.

    By default it monitors CPU usage, memory usage, disk usage and several other system-level components.

If you’ve been using god for monitoring your app processes, then you may consider using monit for this task as well, as it’s a much simpler tool for the job.

    That’s it!

Great, you now have a fully configured Ubuntu server ready to serve your awesome Ruby on Rails application. I hope this tutorial made you realize that this task is not as hard as you thought. Now, after you’ve gone through all of this manually, try building a set of Chef cookbooks that accomplish the above tasks automatically (and repeatably) for you.

    Lunar Logic is hiring!

    Risky intergalactic voyages take their toll. Astronauts get sucked into wormholes, meteors break off whole sections of space stations and the encounters with alien species are not always as friendly as we’d like them to be.

    That’s why we’re in need of fresh blood.

We’re looking for Ruby developers with at least 2 years’ programming experience who speak communicative English.

    Yes, that’s it. No fluffy marketing nonsense, no-nothing!

    Well, you can earn extra bonus points for Scrum experience (planning space walks), being involved in the community (contact with the aliens), open source contribution (tweaking the on-board instruments) and talks/presentations at events (promoting the idea of space travel).

What awaits you? Breathtaking adventures, unforgettable memories and thrilling projects! Along with flexible working hours, an office in the very centre of Krakow and a rich range of extra benefits, including an automated health system, free access to various sport and leisure facilities, as well as the services of well-trained foreign language teachers.

    Web apps won’t make themselves. Check our jobs page for a more elaborate offer. :)

    Retweeter bot in action!

    The idea

There are a lot of people in the Ruby/Rails community that I’d like to follow. However, in order to read their interesting tweets I’d also have to agree to read everything else they tweet, and then I wouldn’t do anything all day other than read tweets.

    Basically, I want to read this:

    But not this:

    (sorry, Aaron…)

What I need is someone who would follow all these people, read all their tweets and retweet only what seems important. This bot is my attempt at creating such a filter.

    How it works

The basic idea was that the best tweets get retweeted a lot, so I made the bot select tweets with a high number of retweets. Adding favorites improved things further, because a lot of tweets get many favorites but not many retweets (especially some useful but not funny tweets from @ruby_news or @rubyflow – the funny ones get retweeted the most). I’ve ignored retweets of tweets by people outside the list, because almost all of them were off topic.

Now I had most of the interesting tweets marked to be retweeted, but many of the top tweets were still not relevant – funny tweets about random things, tweets about politics, current news, Apple, Microsoft, startups, religion, etc. So I added a keyword whitelist – I went through the top tweets and prepared a list of keywords that would only match the tweets I’d like to see retweeted.

I’ve also made the minimum number of retweets+favorites depend on the author – those with a high number of followers get many more retweets on average, so a post with 30 retweets by @spastorino (3,871 followers) will usually be more interesting than a post with 30 retweets by @dhh (72,141 followers).
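To illustrate that rule, here is a rough sketch of the kind of check involved, written in shell for brevity. The real bot is Ruby (see the GitHub link below), and the keywords and numbers here are made up:

# hypothetical filter: keyword whitelist + follower-weighted threshold
worth_retweeting() {
  local retweets=$1 favorites=$2 followers=$3 text=$4
  # keyword whitelist (illustrative, not the bot's real list)
  echo "$text" | grep -qiE 'ruby|rails|gem|rspec' || return 1
  # authors with more followers must earn more engagement
  local threshold=$(( 10 + followers / 5000 ))
  (( retweets + favorites >= threshold ))
}

# example: 30 retweets + 5 favorites from an author with 3871 followers
worth_retweeting 30 5 3871 "New Rails security release" && echo "retweet!"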

The end result is that even though some good tweets are ignored and some off-topic tweets get retweeted (e.g. this tweet by Aaron Patterson got through because the bot thought that the word “rest” was about REST), the filter works surprisingly well in most cases. It should retweet about 4 tweets per day on average, which sounds like an acceptable number. I’ll be checking the results from time to time and making tweaks to the keyword list and the algorithm to make sure the bot makes the right choices.

    Check out the sample below and follow @rails_bot if you like it. If you’d like to learn more about how it works (and maybe help me improve it), see the source code on GitHub.

    Enjoy your cleaner twitter feed!

Tweets by @rails_bot

    New Lunar Logic blog!

    Liftoff! We have a liftoff…

    You can’t even dream of travelling to distant galaxies when all you’ve got is an old space shuttle. That’s why we’ve prepared the new Lunar Logic website. Our blog needed a serious overhaul, too.

It isn’t just about the looks – from now on we’re mainly going to talk about technical stuff, as well as get excited about the events we go to. Add some news and a bit of fun to top it off.

But there’s another reason for the new image: the Lunar crew has just grown in numbers.

    Let’s cut the chatter. Meet Rafael.

    Rafael Caricio, our new developer

Rafael Caricio is our newest recruit. This open source enthusiast flew directly to our office from hot Brazil. He seems to have friends everywhere – we thought he would need some time to familiarise himself with the new environment, but he’s like a good Ruby gem, seamlessly joining the existing structures here in Krakow. Well, he was a bit confused about the snow, but he suited up quickly to meet the cold :)

    Rafael is a busy man. Before coming to Lunar he started breeding tiny robots for GitHub Game Off 2012. As he put it:

My co-workers and I decided to create a project where people would be able to program and have fun watching some animations. The idea came from Robocode (the original Java version), but we wanted to create it from the ground up. And we did. One of us took care of the engine and the rest of us took care of the website. We talked to a designer and she accepted the challenge of designing the game website. The coding itself took two weeks. We’ve been very surprised by the amount of attention it received, so we’ve decided to go ahead with the project and work on a completely new version, trying to address the feature requests from our players. The current, rewritten version of the game is more robust and has Python in the backend.

What’s more, Rafael is an avid Python programmer and seems to be attracted to interesting events. DjangoDash? Node Knockout? Rails Rumble? You name it. He likes to work together with others and loves to share his knowledge. Have I told you that he’s busy? He has helped with the open-source Thumbor (loved by Square or Le Figaro), Provy (a provisioning tool for Python, an alternative to Puppet) and pip (the Python package management tool). And that’s just the beginning of the list of Rafael’s deeds.

So, what’s next? Mr. Caricio has just found himself a nice flat and is still amazed by the snow and the freezing -1 degree Celsius. Luckily, our office is prepared for nature’s onslaught – we’ve got hot tea and lemon.

    Worry not, Rafael!

    Sagrada Familia at Baruco 2012

    Barcelona is truly a beautiful city. Our lunar expedition was amazed by its thriving narrow streets, colorful nightlife and exquisite cuisine full of strange sea creatures. But, surprisingly, these weren’t the reasons we came here. We came here to attend BaRuCo, to listen about Ruby, talk about Ruby, and meet fellow Rubyists from all over the world. How was it then, you probably wonder? Read on!

Marek: the first thing I noticed about the conference was its venue – a science museum. Whoever came up with the idea of putting a bunch of nerds in a building full of bizarre contraptions demonstrating various laws of physics was a genius. I was eagerly awaiting the end of the talks each day just to see it all! Ok, that isn’t exactly true, because the talks were great.

I especially liked the talks by GitHub’s Scott Chacon and Zach Holman, who presented ways of getting your work done more efficiently – respectively, by solving the most basic problems better than before, and by automating every tedious developer task that can be automated. Paolo Perrotta did a great job humorously summarizing the history of software engineering and showing its impact on modern developers.

Among the more technical talks, I enjoyed the ones by Gary Bernhardt and Xavier Noria most. The first described the structure of modern web frameworks, ways of enhancing it, and the pros and cons of every approach. The other demystified the magic of Rails’ autoloading mechanisms.

    Most of the lightning talks were also very interesting, with topics ranging from zsh tips and tricks to a game of go to how perseverance is more important than talent.

    I really enjoyed the first Barcelona Ruby Conference, and I’m looking forward to the second one :)

    Phillip: BaRuCo was hosted in the CosmoCaixa museum in Barcelona, which made for a very interesting two days because when the talks were over there was still much to do, although unrelated to the conference itself.

    Baruco 2012 Macbook

The conference started out with a very good keynote by Scott Chacon, co-founder of GitHub. His topic was not technical; rather, it was about getting work done and creating software that solves specific problems by getting back to the basic principles set forth by the company or the project. The remainder of the day had many enthusiastic and passionate speakers, most notably Gary Bernhardt with his talk about deconstructing the usual MVC controller into smaller, more single-purpose parts, Anthony Eden with his talk about the protocols used in the programming community, and Paolo Perrotta with his very humorous, very interesting look into the history of software engineering and how we have arrived at Agile methodologies.

After the main speakers there was a chance for the attendees to give lightning talks, time-boxed to 5 minutes. These were very interesting, ranging from quick tips on useful apps, to a talk about the ancient board game Go, to a guy telling a story about hope. Let me elaborate on this guy and his story: when he reached his 5-minute mark and was buzzed to stop, he received a wave of applause to keep telling his tale. Hope was the point of his talk: he was an average developer, like most of us are, and he made software that actually helped people and was a relative success. This story definitely resonated with me and, I assume, with most of the attendees.

The second day had talks that seemed to be less enthusiastic, but it ended on a much stronger note than it began. All in all, I think this was a very nicely organized conference with quite a few good talks. The only real complaints I have are that the wifi could not handle this many developers in one room, and that the beach party needed more beer.

    It’s very refreshing to get out of the office once in a while, unglue yourself from that monitor and get some fresh air. You can’t code all the time, can you?

All of us in the company think alike and, as EuRuKo 2012 was coming near, we set our sights on that conference. We struggled and won the battle for the EuRuKo tickets, braced ourselves and set off for Amsterdam, the host city of the event.

    EuRuKo 2012 entrance

Our trip went smoothly: the biggest group travelled by plane, some people arrived by car, and we nestled in the bustling centre of Holland’s capital. We managed to do lots of sightseeing before and after the conference and were very eager to attend it.

The conference Thursday started with Heroku’s Hack Day on Rails, Rubinius and JRuby core in the morning, followed by a splendid GitHub-sponsored boat trip through Amsterdam’s canals – just to warm up before the conference proper. Friday and Saturday were full of talks and lasted until late afternoon.

    Marek’s thoughts

    EuRuKo 2012 gates

    I have really mixed feelings about this year’s Euruko conference. It was organized really well, the venue was just gorgeous; there was only one area that disappointed me – the talks.

And that’s really unfortunate because, well, the talks are the most important part of such an event, and after the conference’s first day I seriously considered trying another programming language – it was that uninteresting. The second day was a bit better, with more technical presentations, but I was still expecting more of the event.

    Adam’s impressions

This year’s Euruko was the biggest one I have attended to date (and perhaps the biggest Euruko yet). Like in Berlin the year before, the organizers chose a cinema for the venue, which is a great choice for a single-track conference with over half a thousand attendees.

The number of attendees was high, but the conference didn’t feel crowded, mostly thanks to how spacious Pathé Tuschinski, the cinema in which the event took place, is. The venue was also one of the very few Polish touches (the cinema was commissioned by a Polish immigrant in the 1920s). It’s a shame that there weren’t more of them, and especially that there were no speakers from Poland.

    EuRuKo 2012 Lunar Team

Content-wise, the conference did not live up to my expectations. The first day was filled with barely technical talks, some of which had nothing to do with Ruby at all. The second day was more interesting, with great talks by Konstantin Haase and Charles Nutter (although I think I had heard most of the JRuby talk once or twice before). Traditionally, the lightning talks were usually more interesting than the regular presentations.

    In my opinion, with Euruko growing more popular and larger every year, it would be good to reexamine its format. So far it has been a very “eyes-forward” conference, with almost no audience participation. I would love to see open spaces included in the schedule. I would also like the conference to have multiple tracks – even as many as four or five. I believe that the large number of attendees is a great feature of Euruko, but the single track format does not scale well to this conference size. I am looking forward to what Athens has in store for us next year.

    Kuba’s opinion

The talk I liked the most was Geoffrey Grosenbach’s keynote on the second day about watching people code. He basically did a set of interviews with well-known Rubyists and gathered a whole bunch of technical and more general tips on how to be a good programmer. Some were obvious, some rather surprising (e.g. if your code is wrong, throw it away and start from scratch), but most were inspiring in some way.

There were a few other good talks, but they were usually the non-technical ones or those not related directly to Ruby: how to make a good library, how to follow the Unix philosophy and apply it in your projects, or how to write maintainable frontend code. What I missed was a few more good technical talks on a more advanced level, where people would share their experiences with various approaches to, say, creating APIs, designing an app’s architecture or scaling apps – a few of those were on the list of proposals, but somehow they didn’t make it into the final set.

    EuRuKo 2012 Lunar Expedition

Like the rest, I also had the feeling that the conference wasn’t as good as it could have been. Everything was great in terms of organization – the WiFi worked well and the venue itself was amazing – but the choice of talks could have been better. It was a great idea to use GitHub pull requests for talk proposals, but maybe a better tool for gathering the community’s opinions and votes would have helped. Also, a lot of the talks seemed to have too little content for the assigned time – perhaps it would be better to timebox them to, say, 25 minutes and fit in more talks this way? Judging by the lightning talks, a few of which were really good, it’s easier to make a talk interesting when you have limited time and have to pick only the best parts.

    A word from Hania

This was my first RoR conference so, after having heard positive impressions from my colleagues, I had big expectations. Unfortunately, I could hardly find any concrete, technical presentations, and the few that had these qualities were so mumbled that I wasn’t able to follow them. I was looking forward to the garbage collector presentation, yet it was very poor and I guess the majority of the audience was very disappointed. The second day turned out much better, especially the Rubinius and JRuby presentations. Generally speaking, though, the event confirmed my preconception that either someone is an attention seeker who gives fancy presentations without much content, or someone possesses huge knowledge but unluckily lacks the skills needed to present it in an interesting way. That’s a pity, because the event could have been much more inspiring.

    Marcin’s view

    Amazing city! Amazing venue! Really great time! When I think about this year’s EuRuKo I have only good memories in my head.

The talks weren’t that interesting, true. Why? Because information travels fast in today’s world, and we could already have read about many of the things presented at EuRuKo on Twitter, Ruby Inside and the like.

The role of IT conferences has changed in recent years (or should change). I’m not expecting to learn new stuff at them anymore. I’m expecting a great atmosphere and a lot of smart people to talk to. And I can say it was exactly like that.

    To sum up: Good job Amsterdam!

    More photos of the event.

    You want to have your terminal sessions recorded and can’t find a man for the job? Hire ascii.io!

    I bet that your terminal has more than once witnessed and hardly withstood the grandeur of your code. You know, a moment of epiphany, when you truly understand the nature of things, become one with the universe and spawn little animals using only your thoughts.

    It would be ridiculous to keep such brilliant knowledge to yourself, wouldn’t it? Say hello to our little friend – ascii.io, the terminal scribe. It’s a brainchild of our seasoned Ruby engineers, Marcin “sickill” Kulik and Michał “Sparrow” Wróbel.

ascii.io lets you record your terminal sessions and share them with other geeks simply by running the ‘asciiio’ command in your terminal. It is fully open-source, with the aim of being the go-to place for terminal users wanting to share their hackery.

    You can see it in “Vim colorscheme showcase” here:


    Ascii.io vim screenshot


The terminal scribe is very self-reliant: it has virtually no dependencies on anything except Python (which comes pre-installed on Linux and OS X). It’s also very skillful: the web-based player is an implementation of a VT100/VT102 ANSI terminal, supporting most ANSI sequences, all text attributes and 256 colors.
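To give a flavour of what “supporting most ANSI sequences” means, here’s a tiny Python snippet – illustrative only, not ascii.io code – printing the kind of standard VT100/xterm escape sequences the player has to recognise and render:

# Standard ANSI escape sequences, printed from Python.
CSI = "\x1b["  # Control Sequence Introducer, the prefix of most sequences

print(CSI + "1m" + "bold text" + CSI + "0m")        # text attribute: bold
print(CSI + "4m" + "underlined text" + CSI + "0m")  # text attribute: underline
print(CSI + "31m" + "red (16-colour palette)" + CSI + "0m")
print(CSI + "38;5;208m" + "orange (256-colour palette)" + CSI + "0m")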

ascii.io is lenient – you don’t need to create an account beforehand, but you can do it after the recording if you want to claim your recorded sessions. It is also easily accessible, for it’s fully open source (both the recorder and the site/player) and everyone interested in building the greatest recording platform for hackers is welcome (source: github.com/sickill/ascii.io and github.com/sickill/ascii.io-cli). It is built from parts written in Python, Ruby, CoffeeScript and bash, so anyone who knows any of these languages can help.

    ascii.io has a quaint sense of humour – it beautifully plays Nyan Cat via telnet ;)


    Ascii.io Nyan Cat!


Or, if you haven’t seen Star Wars in ASCII, here’s a trailer: http://ascii.io/a/8.

But it’s serious after all – it was officially released in Marcin’s lightning talk at the wroc_love.rb conference.

The terminal scribe is easy to work with. To install or upgrade the ascii.io recorder, open a terminal and run the following command:

    $ curl -sL get.ascii.io | bash

(when using zsh you may need to run ‘rehash’ after the above command)

    That’s it! Now you can start recording your terminal sessions with:

    $ asciiio

    Enjoy!

One of the practices espoused by kanban teams is to “make rules explicit.” However, after asking several times in forums and on Twitter “How do you make rules explicit?” without ever receiving an answer, I am inclined to suspect that many teams don’t, in fact, have a good way of capturing and sharing team rules. At Lunar Logic, we’re big fans of BVCs (Big Visible Charts) and our walls are covered with information radiators in the form of charts, graphs, and process lists. We’re also passionate about software quality, so I’d like to share one of our QA process tools, which serves as one way in which we make rules explicit in our teams.

Ask most people what software quality means, and you’re likely to get an answer related to the absence of “bugs”, where bugs are usually defined as features that don’t work. Too many software teams address only this facet of quality by eradicating bugs. That’s all well and good, and the world would be better if we could find and fix the many bugs that live in our favorite software, but it is a woefully incomplete picture of software quality.

    The four dimensions of software quality that we’ve identified are these:

Well-structured, cleanly written code with good automated test coverage: code that is easy to work on, follows standard conventions and coding practices, and has clear style guidelines that are consistently followed.

This makes it easy for new people to join the team or for a product to be handed off to a new team. It also makes it easy to add new features or to refactor code without fear of breaking existing functionality.

    Good design with an architecture that allows for efficient and appropriate scalability.

Not every web application is going to have to support millions of users but, just in case, the architecture should be such that migrating to cloud hosting or to a distributed delivery model doesn’t involve massive refactoring.

    Excellent quality software is a pleasure to use.

    It’s not enough that the features work; they should work in a way that is intuitive and pleasant for the users.

And finally… in high-quality software the features work as intended.

    Without awkward workarounds or… bugs. This doesn’t just mean that the feature isn’t broken, but also that the need was clearly understood and appropriately addressed by the development team.

    Every project team has its own way of ensuring high quality in each of these four areas, although some practices are embraced by everyone in the company based on experience:

All teams at Lunar Logic do pair programming and have peer code reviews on all commits. Architectural quality is reviewed periodically through cross-team code reviews. Hallway testing of new features and design changes helps to address usability issues early.

We tailor new practices to a particular product and environment. The important thing is that every team is thinking about standardizing a set of quality practices that maximizes software quality in all four dimensions, so we don’t end up with a product that looks great but doesn’t work, or works great but doesn’t scale.

What you might notice is that nowhere in this article have I referred to software testers or QA engineers. We have them, of course, and we highly value the perspective and skill set that such professionals bring to a team, but it’s important to remember that software quality is the responsibility of everyone on a software team, and team QA practices reflect this fact.

    Using the chart

    To emphasise the importance of other dimensions of software quality, at Lunar Logic we collect practices on a wall chart with four quadrants.


    QA graph example


We print this chart on A0 paper (that’s a big poster size) and put it on the wall in the team room. Proposals for quality practices that come out of retrospectives are added using post-it notes and, if they prove to be good ideas, they are written on the poster. These practices should be detailed enough to be consistently followed.

For example, rather than “hallway testing” we might write: “When a programmer has finished work on a feature, she asks someone who’s not busy to use the feature, without guidance or prompting, in an IE environment before the feature is marked as ready for a code review.” Making the rule very specific with regard to who does what and when makes it far less likely to be ignored or sloppily implemented.

    How do you make QA practices visible and keep them evolving?

    Here’s the QA Practices wallboard chart in case you’d like to use it!

    AirCasting logo


Have you ever wondered how loud, exactly, that noisy crossroads that prevents you from having a well-deserved sleep is? Is it really comparable to a herd of jumbo jets taking off, pursued by a swarm of fighter planes? How could anyone put up with this madness?

    Worry not – we’ve got a solution to your problem and it’s called AirCasting.

    AirCasting is an Android application that measures noise pollution. It’s a light, flexible and free-for-all Android app we created for HabitatMap.org with funding from Google’s Charitable Giving Fund of Tides Foundation.

    AirCasting sessions


AirCasters measure sound levels, which they can choose to contribute to a crowd-sourced map of noise pollution. This allows everyone to see the best place to walk their dog, wind down and meet their mates or, simply, test their brand new protective earplugs.

    All right, all right, you may say. Buzzwords don’t impress me anymore. I would like some facts. How does it work in practice?

Sound level measurement started in New York City, whose inhabitants complained about noise to a city hotline. The usual hustle and bustle affected their sleep, health and overall well-being. AirCasting becomes a vessel for public opinion and a tool to justify their claims: local authorities may dismiss one claim unsupported by any evidence, but what about hundreds of noise pollution reports made on the spot?

AirCasting has been developed by three Lunar Logic pros: Paweł, Marcin and Grzester (who tested the app). I’ve asked Paweł some specific questions about AirCasting.

    AirCasting sound graph


    Mirek: What does AirCasting do, in a nutshell?

Paweł: AirCasting is a platform for sharing and visualising environmental data. Currently the only supported kind of data is noise levels. These can be obtained by users with their phones and then shared through our website.

    M: Which parts of the smartphone does it use?

    P: Most prominently we are using the microphone to gather noise level data. Other than that we are using Google Maps to visualise the data the user and others have gathered.

    M: Are there any planned extensions for the app?

    P: The app is planned to support a wide range of environmental sensors with which it will connect via Bluetooth. The one that’s being engineered right now is a gas sensor for measuring pollutant concentrations in the air.
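To make the microphone part a bit more concrete, here’s a minimal Python sketch – illustrative only, not AirCasting’s actual code, which lives in the repositories linked below – of the kind of computation a noise-measuring app performs: turning a buffer of raw 16-bit PCM samples into a decibel figure.

import math

def sound_level_db(samples, reference=32768.0):
    # Root-mean-square amplitude of the buffer, expressed in decibels
    # relative to the largest possible signed 16-bit sample value.
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float("-inf")
    return 20 * math.log10(rms / reference)

# A quiet buffer scores far lower than a loud one:
print(sound_level_db([100, -120, 90, -80]))    # about -50 dB
print(sound_level_db([20000, -21000, 19500]))  # about -4 dB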

AirCasting has been live since 20th December 2011 and is available for free from the Android Market. The source code is available under the GPL:

    https://github.com/LunarLogicPolska/AirCastingAndroidClient

    https://github.com/LunarLogicPolska/AirCasting