A planet of blogs from our members...

Caktus Group: 5 Scrum Master Lessons Learned

March 2018 marked the end of my fourth year as a Scrum master. I began with a Certified ScrumMaster workshop in 2014 and haven’t stopped learning since. I keep a running list titled “Lessons Learned,” where I jot down thoughts I find significant as they occur to me so that I can go back to the list and draw from my little bank of personal wisdom.

Some of the items on the list are practical (“Estimate size, derive duration!”), some are abstract (“Don’t let the process get in the way”), and some are just reminders (“Stop being resistant to change, let yourself be flexible”). They are the distilled product of my experience working with Scrum teams. Here are a few that I would like to share with you; I hope you will find them useful in your own path.

1. Learn about people

Learning about Scrum and Agile is essential to a Scrum Master’s development. There are multitudes of books, blogs, podcasts, and other materials available to fulfill that need. However, a more well-rounded curriculum includes learning about people. After all, the Agile manifesto begins with “Individuals and interactions over processes and tools.”

The Scrum Master role is about dealing with people and how they communicate and work together, more than it is about process and process frameworks. Without understanding people and the nuances of their interactions, how can a Scrum Master be an effective servant leader?

I suggest reading about teams, leadership, management, psychology, and anything else that might give you insight into people and how they work. Here are some examples:

  • The Five Dysfunctions of a Team by Patrick Lencioni is an excellent place to start. It is an enlightening introduction to the roots of the problems you have likely observed on your own team, and gives some practical ideas for how to address them.
  • If you work in software development, chances are you have some introverts on your team. Quiet: The Power of Introverts in a World That Can’t Stop Talking by Susan Cain will help you understand what it’s like for those individuals to work on a team and how you can help them.
  • Drive: The Surprising Truth About What Motivates Us by Daniel H. Pink gives great insight into the intrinsic motivation factors of autonomy, mastery, and purpose that align neatly with the Agile values.

2. Buy sticky notes - lots of them, all the colors

Not just sticky notes: index cards, colored markers, sticky dots, funny stickers, cork boards, whiteboards, magnets, flip charts, tokens, butcher paper, the list goes on. If a team is co-located (even temporarily), physical exercises will come in handy. I don’t mean jumping jacks, but activities where everyone is actively participating instead of watching one person move virtual cards around a virtual board on a screen.

The team will feel more engaged and involved if they are standing, moving around the room, physically doing the planning, or the writing, or the moving of cards. I have observed this firsthand in activities like user story mapping, user story writing workshops, sprint planning, retrospectives, and daily standups.

The act of writing on paper can be much more powerful than typing and can be more easily displayed publicly. A team’s Definition of Done should be visible and obvious - write it on a flip chart sheet and hang it up on the wall of the team room (better yet, have the team write it together). The act of writing will help them remember it, and help them own it.

A physical burndown chart that team members update every day as part of standup brings everyone’s attention to it, where they might not think to go look at their digital tool. Have fun with it too - one team I work with uses emoji stickers to mark tickets that will require lengthy QA time.

3. Don’t force it

It may be intuitive for many people who find themselves in the role of Scrum Master to try to make development teams conform to Scrum, or make organization leaders see that they need Agile in their lives, or make managers understand how they should interact with the team, or make their team adopt new engineering practices.

This type of “command and control” approach is not compatible with the Agile mindset, and can be extremely detrimental to fledgling teams (even more so to ones who have already been working in Agile), who will chafe at being told what to do and react negatively. It’s also going to be frustrating for you when it doesn’t go your way - and it won’t.

Instead of trying to force the results you want, first examine your reasons for wanting those results: is it because it’s in the best interest of the team, or is it just what you want? Then consciously let go of what you want, even if it’s what you think is best for the team.

Start asking questions: Why isn’t the team paying attention to the sprint burndown? Ask them instead of becoming frustrated when they aren’t heeding your daily reminders to stay aware of the sprint’s progress. Maybe it’s hidden away behind some easy-to-miss menu in their digital tool, or maybe they don’t feel enough ownership of the sprint work to care about its progress. Why aren’t team members practicing pair programming daily, when you have repeatedly made the case for its usefulness? Maybe your team is composed of introverts who are uncomfortable sitting close to others and talking out loud for extended periods of time, and feel they produce their best results when they are allowed to achieve a state of flow in isolation and privacy.

Ask the team, instead of trying to come up with the answers on your own. It’s easy to think or assume you know what the other party’s motivations are, but the only way to know is to ask. Once you understand the reasons why by asking the right questions, you can begin addressing the root cause in a way that will truly help whoever you’re working with achieve their goals, not the results you want from them.

4. Curb your inner helicopter Scrum Master

Let the team fail and recover on their own instead of swooping in with advice or corrective action at every sign of danger. Not letting the team make mistakes seems intuitive - after all, you are partially responsible for their success, and failure may reflect negatively on your work with the team.

It is the responsibility of the Scrum Master to ensure that impediments are removed, and you may see future pitfalls as impediments in the team’s way. However, it will be more beneficial for the team in the long term to help them learn how to identify those dangers and take action themselves, rather than relying on you to constantly be on the lookout in their stead.

It is a core concept of the Agile mindset that learning from mistakes is more effective and valuable than learning from success. Instead of preventing mistakes and failures, ensure that the team has a safe environment to make mistakes, where failures are low-risk and low-impact:

  • You can foster a culture of trust where the team will not be afraid of ridicule and repercussions for making mistakes.
  • Working in short iterations means that, if unsuccessful, one sprint won’t be likely to sink the whole project.
  • You could encourage the use of testing environments where experimentation can be carried out safely without impacting the live product and its users.
  • Continuous integration and deployment practices make implementing and testing small changes to the code effortless, and help lower risk at the time of release.

Don’t attempt to solve or prevent all of the team’s problems for them like a helicopter parent might “hover” over their children, even if the solution is obvious to you. Instead, let them make the mistakes, and ensure that they can learn from them and use that knowledge to improve as a team and prevent future mistakes.

5. Know what success looks like

When a team is first formed or adopting an Agile framework for the first time, they will likely need the Scrum Master to guide them through everything, from facilitating every meeting to removing every impediment. A good Scrum Master can shine in these moments, jumping at every call for help and doing everything they can to see their team through difficult situations.

It feels great to be needed and depended upon for your expertise, and there’s a lot of career advice that emphasizes the benefits of making yourself indispensable to gain recognition and job security. But is it actually a good thing when a team that’s been working together for months or years continues to look to their Scrum Master for help at every turn?

I believe that the best sign of a Scrum Master’s success is that their team no longer needs them. This means they have set their team up to be independent, self-organized, empowered, and striving to continuously improve without being pushed to do so. This isn’t going to happen overnight, and it will require careful consideration about whether the team is ready to take over the responsibilities that the Scrum Master has been fulfilling.

A good way to pilot this is to just not show up and see what happens. Don’t attend every standup, miss a sprint planning or retrospective every now and then, or even take a vacation without worrying about having anyone fill in for you. Did the team keep functioning normally? Maybe stay silent during a conflict. Did the team resolve it without your input? If yes, then you have achieved success: your team no longer needs you.

So what now? You don’t have to dust off your resume quite yet. The team may still need your assistance in some cases, such as removing organizational impediments that are outside their sphere of influence, or individual team members may still need coaching. You may be called upon to see them through some major changes, or help them kick off a new project.

You will also expand your efforts working with others in the company, such as managers and executives, to help them create an environment where the development teams can continue to flourish. Check in with your team at regular intervals - even if they don’t need you, they may still want you around!

Caktus Group: Caktus at PyCon 2018

We’re one month away from PyCon 2018 and are looking forward to this year’s event in Cleveland, OH. Caktus is proud to sponsor once again and will be in attendance with a booth.

Caktus Booth

Building and renewing contacts in the Python community is one of our favorite parts of participating in PyCon. Stop by our booth May 10-12 to talk about Python and your next custom web development project, and to enjoy the swag, games, and giveaways.

We have two Raspberry Pi 3 kits to give away to lucky winners. All you have to do to enter is take a quick survey at our booth and leave your email address so that we can contact you if you’ve won.

Some of you may remember our Ultimate Tic Tac Toe game from last year. Since then, our developers have been hard at work improving the AI and transferring it to a Raspberry Pi. We only had a couple of champions last year. Will you beat the game this year?

Kurtis, the winner of last year's Ultimate Tic Tac Toe game.

For those attending the PyLadies auction on Saturday, May 12, a gorgeous scarf will be up for grabs. Hand-made by local Durham weaver and fiber artist Elizabeth Chabot, this piece in Python colors will let you show off your love for the language in style.

Talks

One of the reasons our team loves PyCon is the opportunity to keep skills sharp and learn from the range of excellent talks. This year they’re excited about a number of the scheduled sessions.

Some of these will likely appear in our annual PyCon Must-See Talks series, so if you can’t make it this year check back in June for the attendees’ top picks.

Job Fair

Are you a sharp Django web developer searching for your next opportunity? Good news - we’re hiring! View the spec and apply from our Careers page. We’ll also have a table at the job fair, so come meet the hiring manager and learn more about what it’s like to work at Caktus.

Don’t be a stranger!

Come say hi at the booth, look for members of the Caktus team in our new hoodies, or reach out in advance to schedule a dedicated time to meet.

The new Caktus hoodie, in teal with a white logo.

Whether you’re at PyCon or following along from home, we’ll be tweeting from @CaktusGroup. Be sure to follow us for the latest updates from the event.

Hope to see you in May!

Caktus Group: Agile for Stakeholders

In Agile development, a stakeholder is anyone outside the development team with a stake in the success of the project. If you are a stakeholder, knowledge of Agile will help you understand how the project will be developed and managed, when you can expect to see progress, and what the team needs from you in order to deliver their best work. Understanding these basic concepts and what your role entails are essential to your project’s success.

What is Agile (and why should you care)?

Agile was invented as a set of values and principles to guide software development teams in adapting to change and acknowledging unknowns. In development, an enormous amount of time and energy can be spent on managing change: changing expectations, changing market landscapes, changing requirements, and changing knowledge of the work.

Since change is a constant, it makes sense to build a process that takes it into account as expected. Agile is an iterative, incremental approach to software development and delivery that allows for uncertainty and change.

There are many methodologies, practices, and processes that fall under the “Agile umbrella.” For example, you might have heard of Scrum, Kanban, user stories, or sprints. These may or may not be used by the development team you work with. You should feel free to ask about them if you are curious about the team’s internal workings, but none of them are necessary to understanding the gist of what Agile is and how it works.

Why Agile?

Agile was introduced as a reaction to “waterfall” development, where work is done in long, consecutive phases of requirements gathering, analysis, design, coding, and testing, each of which can last weeks or months. While there is nothing inherently wrong with this approach, it does present significant challenges.

Time to market

If you want to launch your software in a competitive market, you may need to assess whether spending years on development before being able to release anything will be viable for your business. During that timeline, it’s possible that you will be outrun by your competition, or that the market will change in such a way that your product will no longer be cutting edge, or even relevant at all. Technology changes quickly, and so do consumers’ needs and expectations - you will need to be able to keep up.

Running out of time

Imagine that your waterfall project deadline is fast approaching. It’s likely that development is either in the coding or testing phase. If the work is running behind schedule and that deadline can’t be pushed out for business or budget reasons, either scope will have to be cut during the coding phase, or the development team will have to burn budget scrambling to get the initial scope implemented in time.

The testing phase might also be cut short, leaving little time to test the software and to identify and fix defects. All aspects of the project suffer in this case, and the likely result is a low-quality product that will not meet your customers’ needs.

Measuring progress

In waterfall, working software isn’t produced until the coding phase has completed (relatively late in the overall development schedule). This makes it difficult to measure progress and know if the project is on schedule, or how close it is to completion. You could be more than halfway through your timeline and have nothing more to show for the time and money spent than documentation of requirements and designs. If the project runs out of budget at this point, there is no part of the software that is usable and your investment is wasted.

Increasing risk

In traditional development, risk only increases as the project progresses because the work cannot be validated, from a technical and from a business standpoint, until the last phase of development. If any major problems are uncovered in the testing phase (such as issues with the basic architecture of the app), it will require significant rework.

The rework might entail going back to the beginning phases and revising requirements and designs, then refactoring code. This will have a major impact on the project budget.

Change requests

The waterfall approach to software development does not support responding to change quickly or efficiently. If any changes to the requirements are brought up during the requirement gathering phase, they can probably be incorporated fairly smoothly. However, the farther along the project is, the more complex and time-consuming it will be to make any changes.

Waterfall relies heavily on rigid requirements because they have to be handed off to a design team, who will then pass the designs to the coding team, who will then hand off software to a testing team. Any need for changes to the end product requires a change request going through each team in turn, which will take more time the farther along the project is.

All of this does not mean that development can’t be done in this way. Waterfall has become something of a dirty word in development, but this is not necessarily warranted. Some types of development work can be done perfectly well in long, consecutive phases with delivery at the very end, if there is no uncertainty about the work and if the capacity and capability of the development team(s) are a stable, known quantity. However, these ideal circumstances are rather rare. This is where Agile can help.

Agile in Practice

Since the concepts of Agile are generally abstract, it can be a struggle for anyone unfamiliar with this approach to understand how it works and why it matters. As a client, you might begin to ask yourself why any of this is relevant to you; if this is the way that development teams need to work, then great - they should do that! But your role and participation as a stakeholder are vital to the success of this approach.

This section provides an overview of how Agile development works in practice and what you can expect, as well as what the team will expect of you.

Step 1: Break it down

When development is cleared to begin (generally after some initial discovery work), the first step for the development team will be to break down the work into small chunks. While there will still be many unknowns at this point, this is a good place to begin.

Once the team has enough information to get started, they will generate a list called the “product backlog.” Each item in this backlog will represent some small piece of functionality for the product, such as a user’s ability to perform a specific action (e.g., logging in). These small pieces are what will allow the team to implement features in an incremental, iterative way.

For you as a stakeholder, this step can include participating in a discovery workshop; story mapping activities; and discussing project vision, goals, and strategy. The purpose of this early collaboration is to reach a shared understanding of what the team will be building and why. They will have questions for you about initial scope, specific features, content strategy, and more. Alignment between you and the team at this stage is what will start the project off on the right foot.

Step 2: Estimate everything

Once the backlog for a new project has been created, the developers will estimate each backlog item. This step is somewhat optional depending on the nature of the project and on the team’s established processes. The estimates will help them (and you) understand how much work each item will be to implement relative to the other items in the product backlog. Most Agile teams use a point system to do this.

As a stakeholder, you may have visibility into those estimates, which will help you give input on prioritization decisions throughout the project. It’s also important to remember that estimates are just that - estimates. They will be imprecise (and sometimes inaccurate), but they will be updated and refined as the team progresses through the work and accumulates knowledge.

Step 3: Prioritize, prioritize, prioritize

The product backlog isn’t ready to be worked on until it has been prioritized. Prioritization will be based on multiple components, including business value, estimates of effort, and various risk factors. It’s important to note that the reason the backlog is one unified list is that the priorities will be ordered from top to bottom: each item in the backlog is a higher priority than the one directly below it. This means that no two items can be the exact same priority, purposefully forcing some tough decisions.

The team’s product owner (PO) is responsible for maintaining the backlog, ensuring that it is clear and accurate. The PO will need your help, however, to understand the details and value of the backlog items. They are likely to ask for your input on high-level feature priorities and will ensure that the backlog is prioritized correctly to make the most use of development time.

Step 4: Start building

Once the product backlog is prioritized, the developers can begin implementation. They will pull items from the top of the backlog only. The most important work is always done first, saving less important work for later in the likely event that the team does not get through the entire backlog before time or budget runs out.

When the team selects a backlog item to work on, it will go through multiple phases in quick succession, such as analysis, design, coding, testing, and validation. While this sounds very much like the waterfall phases outlined above, the difference is that each backlog item moves through these phases individually and relatively quickly thanks to their small size.

As a stakeholder, you will be kept up to date about what the team is working on and when you can expect to see new functionality.

A note about sprints

You may hear the development team refer to sprints, or say that they work in sprints. A sprint, or an iteration, is a timebox in which the team completes a set of backlog items. Not all Agile teams work in sprints, and sprint length varies by team, from one to four weeks.

At the beginning of a sprint, the team identifies high priority work from the backlog that they can complete in that timeframe and commits to getting it done. Once the sprint has started, it’s important for current priorities to remain stable, meaning that the work pulled into the sprint can’t be switched out for other work.

This allows the development team to focus on finishing a set of backlog items without interruptions or distractions, also limiting work in progress for efficiency. The backlog priorities can still be updated at any time, until the beginning of the next sprint.

Step 5: Review and feedback

Once the team has completed an item or a set of items from the product backlog, the work will be presented to the stakeholders for review. This is usually a live demo, but can also be just a notification that new functionality is available for you to look at yourself. You can expect that, unless noted otherwise, the completed work is fully tested and functional.

If the stakeholders are happy with the work, great! The team will move on to the next items on the backlog. Otherwise, any requested changes are entered into the backlog as new items and prioritized along with everything else left to do.

Reviewing completed work and providing feedback is your most important responsibility as a stakeholder. The team needs to know whether they have built the right thing, whether it matches your expectations, and whether they are heading in the right direction.

The team also needs your negative feedback. It’s always nice to hear what you like about what they have built, but it’s important for them to hear what you are unhappy with in order to course-correct and improve. (Check out this post for some of the techniques we use to gather your feedback.) Early and regular feedback is crucial to the Agile approach.

Evolving the backlog

The backlog changes constantly. Items are added, deleted, rewritten, re-estimated, and reprioritized. The backlog is a living artifact that is updated as work is completed, feedback is gathered, new information is acquired, new knowledge is gained, and new ideas are generated. It becomes a sort of wish list, rather than a set of rigid requirements.

As stakeholders, you can always request that new items be added to the backlog. Be prepared to answer questions about how important those new items are to you in comparison to the others. Remember that adding something to the backlog means bumping something else down in priority.

Step 6: Adaptive Planning

In order to predict when the project will be ready for a release or another milestone, the product owner will create a plan based on the pace of development and how much remains in the backlog. The product owner will use this plan to forecast either a date by which a determined scope (set of backlog items) can be completed, or how much scope can be completed before a determined date. For example, if the team completes roughly 20 points of backlog work per sprint and 100 points remain in the desired scope, the forecast for that scope is about five more sprints.

If the forecast shows that the desired scope can be completed by the desired date, then no changes are required. If things change along the way, the plan is updated to reflect the changes, either by decreasing scope or pushing out the date.

Although you will have visibility into the plan early, it’s important to remember that it will inevitably change. Some flexibility in scope or time is absolutely necessary for any development team to deliver high-quality work.

The Bottom Line

The point of Agile is to start by building something small and simple, validating early and often that it’s the right thing or heading in the right direction, and then iterating by improving on it or adding more to it. This is in opposition to more traditional approaches in that you as a stakeholder don’t have to wait until the end of a project to see the work. You are part of the process; you have visibility into real, measurable progress. You can change your mind as you also learn about your product and its users along the way. By playing an active role in the process, you can help ensure your product's success.

Caktus Group: ShipIt Day Recap Q1 2018

Another quarter, another ShipIt Day! Take a look at what our team dove into in the first part of 2018.

Digital linguistic resources

Neil recently discovered that work has been done to create digital resources for his favorite language, Coptic. The database is a collection of normalized text in which words are linked to dictionary entries. He wanted to branch off the projects that exist, get his own project going, and make improvements to the interface.

The data is a huge collection of XML files, one for each letter in the Coptic alphabet, with additional spelling and grammatical information. Neil used Elasticsearch to process and index the data files, stored the data in Postgres using adjacent fields, and hooked it all up to a search interface. He also set up a Digital Ocean droplet to host everything.

In the process, he also found that he had indexed 501 lexical entries, which is about all the droplet can handle. In the future, he’ll work toward an improved version of the dictionary.

Learning React

As part of efforts to standardize the front-end tech stack, Kia worked on learning more about React. Although tutorials with real-world examples were difficult to find, she was able to think of situations in which React would come in handy. Its power really lies in small, modular pieces that you can chain together to create neat user interface experiences.

Kia looks forward to introducing that to some potential projects in the future and will continue learning React on her own time.

Mark also spent some time learning React and built a puzzle game, similar to a handheld number tile puzzle. He liked a tutorial that treated React as a plain JavaScript tool, without a build system or fancy syntax, and walked through building React components for someone already familiar with writing JavaScript. He found it a useful reference.

Get the code for the game on GitHub.

Brython

Dan was curious about Brython, a Python 3 implementation for client-side web programming which lets you write Python that runs in the browser. He decided to build a replica of the mobile game Flow Free using the tool. He did all the logic in Python, which was familiar and easier for him than the usual JavaScript.
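For a flavor of what that looks like, here’s a minimal Brython sketch (not from Dan’s game; the element id is hypothetical). Python code like this runs directly in a page that loads brython.js:

from browser import alert, document

def on_click(event):
    # Respond to a click on the element with id="board"
    alert("Cell clicked!")

document["board"].bind("click", on_click)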

Scrum Trouble board game

Gerald wanted to come up with a creative way to incorporate some of the principles of Scrum and the things we see in sprints on a day-to-day basis. The result? The Scrum Trouble Board Game!

A few of the cards from the Scrum Trouble board game.

His game adapts Trouble and Exploding Kittens with game mechanics like Sabotage (things that can go wrong), Action (actions to overcome sabotage), and Generic cards (perform no action but can be combined with other cards to gain action cards from other players).

It emphasizes the importance of QA and testing, and teaches Scrum principles and Agile thinking in an engaging way.

Neural network image classifiers

Calvin explored neural network image classifiers, using an introductory wrapper around TensorFlow to follow an “Is this a cat or a dog?” tutorial.

He found it straightforward to set up the convolutional network analysis and let the wrapper do the math. After that, he adjusted the layers and started the training. Calvin built a command line tool that separates the images into their appropriate categories, which he feels went well, and he plans to keep iterating on it for improvements.

As part of the process, Calvin also made the tutorial more generic to increase flexibility, for example, to use any animals and not just cats or dogs.

Arduino

Inspired by Ned Jackson Lovely’s talk at PyCon 2014, Scott worked on getting a remote control helicopter to fly using an Arduino and Python code. He got the LEDs to blink and then got it to fly!

In the process, he found the Arduino is a great way to do embedded programming because it makes it super simple to transfer code from your computer. There was already an existing Python library for this helicopter, making it an ideal project to test.

Mapping user experiences

UX designer Basia read Jim Kalbach’s book Mapping Experiences and was inspired to think about how the techniques of mapping user experiences are applicable to the work we do here at Caktus.

In order to map experience at the level of the applications we build for our clients, we conduct user story mapping. However, if we think about what we do as helping our clients deliver value to their users, we also need to consider mapping user experience in terms of value alignment, and adjust our approach accordingly.

Customer journey mapping.

Three maps that could empower us to find more value for our clients include:

  • User experience map
  • Service Blueprint
  • Customer (or User) Journey Map (CJM)

Each maps the exchange of value in a different way and could provide additional insights for our clients.

Tequila conversion

Dmitriy worked on converting the Caktus website from Margarita to Tequila. He successfully got part of the way through. In the process, he thought of some suggestions for improvements to the documentation, including some formatting changes. Dmitriy also found some things to improve on the Caktus website that he will implement as part of ongoing improvement work.

Redmine project board

One of the Caktus development teams uses a physical board to track projects and progress. However, it can be hard to keep track of all of the tickets when working remotely.

Phil sought to create a board with Vue.js, using the Redmine and JIRA APIs. JIRA doesn’t allow cross-origin (CORS) API calls, but he was able to make a UI board with blue stickies. It currently has no moving functionality, but you can enlarge a ticket so it is more readable. He’s looking into a workaround for the problem with the JIRA API.

He would eventually like to add functionality, including making comments, assigning tickets, and moving tickets.

Diversity and inclusivity in the hiring process

As part of Caktus’ ongoing hiring efforts, Liza worked on improving the diversity and inclusivity of the hiring process by testing Textio, an augmented writing tool for job descriptions. Textio analyzes job location and industry/field as well as the language of the job description to make recommendations on word choice, tone, and structure. The tool is best known for helping companies develop more engaging job descriptions with consistently balanced and inclusive language, thereby attracting more diverse talent.

Improving skills

Charlotte started reading a book called Coaching Agile Teams, while Robbie studied for the ISTQB software testing certification. Jeff read High-Performance Django while helping out with deployment issues on other projects.

Show me more!

To find out what we've done for past ShipIt Days, see our other blog posts.

Caktus Group: When a Clean Merge is Wrong

Git conflicts tell us when changes don’t go together. While working with other developers, or even when working on more than one branch by yourself, changes to the same code can happen. Trying to merge them together will stop Git in its tracks.

Conflicting changes are marked in their files with clear indicators to show what changes Git couldn’t figure out how to merge on its own. Current changes are shown on the top and the changes to merge in are shown below.

Changes in a Git merge.

When the merge does not have any conflicts, everything is fine and you can move on with your day.

...Right?

This was just an example, but here’s another set of changes from two branches I made recently. In one branch I was sorting a sequence of templates:

A code block sorting a sequence of templates.

In another branch I was adding an “Introduction” page at the beginning of the same list of templates:

A code block showing the addition of an introduction page to a list of templates.

Both of these branches were merged to the mainline branch. I expected them to have caused a conflict, but they didn’t. Git decided it could figure out the order in which I wanted these two lines added to the same place.

A code block showing the effect of the combined merge.

It might be clear from this GitHub diff what’s wrong with the way Git merged the two changes together. First, I’m inserting that new page to the beginning of the list. But second, I’m sorting that same list so the new page is no longer at the beginning.
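Since the diffs above appear only as screenshots, here’s a hypothetical Python reconstruction of the combined result (the names are made up, but the shape of the bug is the same):

templates.insert(0, introduction_page)  # second branch: intro page goes first
templates.sort(key=lambda t: t.title)   # first branch: sort the whole list
# Git stacked the two lines cleanly, but the sort runs after the insert,
# so the "Introduction" page no longer stays at the beginning.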

The bug, caused by a merge that looked clean, had to be fixed in yet a third pull request (PR). That’s something I want to avoid in the future and, thankfully, that’s actually pretty easy with some forethought.

The simplest protection? GitHub’s Protected Branches feature. I can turn this on in the Settings section of the repository, in the Branches section.

Menu item for navigating to GitHub's Protected Branches feature.

I want to protect the develop or master branches, and I also want to control all the PRs that merge into them. First, add the branch to be protected.

Selecting a branch in Protected Branches.

Next, enable three settings:

  • Protect this branch, enabling branch protection
  • Require status checks, enabling conditions that have to be met before a PR can be merged
  • Require branches to be up to date, making one of those conditions be that every PR has the latest changes from the upstream branch merged into it before it can be merged and closed.

Setting up branch protection.

This will stop anyone from merging a branch that hasn’t been updated, so you get a chance to see the results of the merge before you actually push it upstream.

There are more options you can enable that give you even stronger safety nets. GitHub can run the test suite automatically using a continuous integration (CI) service like TravisCI or CircleCI, and does its best to make the process painless to set up. CI integration will run the whole test suite when someone creates a PR, when it gets updated, and when branches are merged. GitHub won’t let you merge a PR if CI hasn’t given it the green light. This may slow down workflows, but it is worth it to know the right things are being merged safely, and it can save you time in the long run.

Of course, it won’t do everything. Once a branch is updated with the latest from a master or develop branch, a safety checklist should be followed:

  • Have an extensive test suite and be sure that any new changes or additions to a branch are covered by new or adjusted tests.
  • If behavior is added or changed, update the tests accordingly to ensure the changes stay verified when updates, merges, and future changes could break them.
  • Check the project after merging, even with a quick smoke test. Don’t assume changes that looked fine on a branch won’t break once merged. Look again.

Developers rely on a lot of tooling. Sometimes tooling fails and some of those times more tooling is actually a good solution (like GitHub helping protect us from common Git mistakes), but don’t forget the human solution of simply being more vigilant.

One last note: protected branches can be great for small teams, where the team is likely to have only a handful of PRs open at any one time. For a larger team, it may become burdensome that every PR needs to be updated and have CI run again, since the number of PRs open (and thus affected by every merge) is much larger. In this case, teams may need to coordinate better or find other tooling options that work better in those situations.

Read more posts by Calvin on the Caktus blog.

Caktus Group: What is Software Quality Assurance?

A crucial but often overlooked aspect of software development is quality assurance (QA). If you have an app in progress, you will likely hear this term throughout the development life cycle. It may seem that coding is the brunt of the development work, since without code your app doesn’t exist, but quality assurance efforts often account for up to 50% of the total project effort (1) (and part of the QA effort is coding). Without quality assurance, your app may exist but it is unlikely it will function well, meet the objectives of your users, or be maintainable in the future. QA is important, but what exactly is it?

QA factors

Software quality assurance is a collection of processes and methods employed during the software development life cycle (SDLC) to ensure the end product meets specified requirements and/or fulfills the stated or implied needs of the customer and end user. Software quality, or the degree to which a software product meets the aforementioned specifications, comprises the following factors as defined by the ISO/IEC Standard 9126-1: functionality, reliability, usability, efficiency, maintainability, and portability. The following sections will go over what these factors are in more detail, and how quality can be assessed for each.

Functionality

Functionality, as an aspect of software quality, refers to all of the required and specified capabilities of a system. High quality is achieved in this aspect if implemented functionality works as described in the specifications. Arguably, you could have a software product with high functionality that does not have any of the remaining aspects and is still useful to some extent. The same cannot be said for the other quality assurance factors.

The key to ensuring correct functionality in a software product is to start specifying functionality early, in the discovery phase. Requirements need to be teased out, defined, and recorded. This can be done in a discovery workshop or other forms of requirements gathering, and will continue to occur throughout the SDLC. Requirements often change throughout a project, and it’s important that any changes be documented and communicated to all parties.

With documented specifications, functionality can be assessed during development with white box testing techniques like unit tests or subtests and black box testing techniques like exploratory testing.

At Caktus, white box testing is primarily handled by our developers, while black box testing is the domain of our Quality Assurance Analysts. Functionality assessment occurs in every step of the development process, from initial discovery to deployment (and future maintenance).

Reliability

Reliability is defined as the ability of a system to continue functioning under specific use over a specific period. In order to assess reliability, it’s important to identify how the software will be used early in the development process. How many requests per second should the app support? Do you anticipate large spikes in traffic tied to scheduled events (e.g., beginning of school year, end of fiscal year, conferences)?

Expected usage can inform the technology stack and infrastructure decisions in the beginning phases of development. Reliability testing can include load testing and forced failures of the system to test ease and timing of recoverability.

Usability

Usability refers to whether end users can or will use the system. It’s important to identify who your users are and assess how they will use the system.

Questions asked and answered while assessing usability are: How difficult is it for users to understand the system? What level of effort is required to use the software? Does the system follow usability guidelines (e.g., comply with usability heuristics and UX best practices, or adhere to a style guide)? Does the system comply with web accessibility standards (e.g., Web Content Accessibility Guidelines or Section 508)?

Conducting usability testing with end users helps uncover usability problems within the system.

Efficiency, maintainability, and portability

Software efficiency refers to the measurement of software performance in relation to the amount of resources used. Efficiency testing evaluates compliance to standards and specifications, resource utilization, and timing of tasks.

Maintainability refers to the ease with which the software can be modified to correct defects, meet new requirements, and make future maintenance easier. An example of poor maintainability might be using a technology that is no longer actively supported or does not easily integrate with other technologies.

Portability refers to the ability to transfer the software from one tech stack or hardware environment to another. The requirements for these three aspects should be discussed by project stakeholders early in development and measured throughout development.

Important notes about quality

The above quality characteristics (functionality, reliability, usability, efficiency, maintainability, and portability) must be individually prioritized for each project, as it is impossible for a system to fulfill each characteristic equally well. Focusing on one aspect may mean making decisions that negatively affect another (for example, choosing to use technologies that make a product highly maintainable may make it much more difficult to port). Frequently, a specific product requires a very narrow focus on one aspect; a tool that has a very small number of users only needs to be usable for them, not the whole gamut of humanity.

Target quality for a product should be discussed among all stakeholders and agreed upon in writing as early as possible. This quality agreement should be stored somewhere easily accessible by all team members and referenced frequently during execution of quality assurance tasks.

There’s an unspoken tenet of software development that says no product can be defect-free. In order for software to be successfully developed and deployed into the wild, it’s important that all parties acknowledge this.

Striving for perfection and a 100% defect-free app will waste time and resources, and ultimately be futile. Similarly, it’s important to recognize that the absence of identified defects does not indicate a product is defect-free; more likely, the absence of defects indicates the product has not been thoroughly tested.

The goal of quality assurance is not to ensure there are no defects in the software, but to ensure that the agreed upon quality level is met and maintained. You should expect that some known defects will be low priority and not fixed before deployment. Additionally, you should expect that some defects will be very high priority and must be fixed prior to deployment. Priority of defects should be determined by a combination of the quality agreement, severity of the issue, and stage in the SDLC. We’ll go into more details regarding prioritization of defects in a later post.

References

(1) Andreas Spillner, Tilo Linz, and Hans Schaefer. Software Testing Foundations, 4th edition.

Caktus Group: Managing Sprint Reviews for Multiple Clients or Projects

Sprint reviews for teams working with multiple clients and managing multiple projects can be a challenge. At Caktus, we combine more traditional sprint review guidelines with some tweaks to fit our company’s and clients’ needs.

Meeting preparation

The morning of the sprint review, our Scrum Master shares the sprint goals with the stakeholders. This reminds stakeholders what we were working on and allows them to decide if what we are reviewing is relevant to them.

Before the sprint review meeting, the team gets together to determine the presentation order and who will present what. As the product owner (PO) for my team, I go through each of the sprint goals, organized by client, and we discuss what will and will not be presented.

Meeting structure

If there are no external stakeholders, the meeting is time-boxed and follows the general flow below to keep things organized and moving forward:

Starting the meeting

The product owner starts the meeting:

  • Sets the stage
  • Introduces attendees (when necessary)
  • States what will and will not be demoed from the sprint

Presenting work

The team presents done work on staging/production, or work that is in QA but not yet completed if there is value in getting feedback on it at this point. Incomplete work is presented with the caveat that it is not done; we do not share any work that is solely on a developer’s local environment.

  • Team members demo the work, individually or jointly
  • The presenting team member discusses any applicable key events, major challenges, and solutions
  • The PO asks for questions and feedback from stakeholders, recording it for later prioritization in the backlog

Discussion of the backlog

Once the demo is complete, the PO leads discussion of the backlog:

  • Review the next highest backlog priorities and projections/release plan (if appropriate)
  • Solicit opinions on those priorities
  • Take into account feedback from the sprint review and re-evaluate the backlog for next sprint planning

When these meetings consist of internal stakeholders or a single client, we go through this script once.

In the cases where the team is working on projects for multiple clients, we break our meetings into half-hour or one-hour chunks. We then go through this script with each client, discussing only their pertinent projects.

Why do it this way?

Following this format gives each project the time required to have a thorough and helpful sprint review, and keep things on track for both the team and the client. It allows the client to see their features come to fruition and gives them the opportunity to ask questions in real time to the developers who do the actual work. It also allows the developers to hear feedback directly from the clients and gives both an opportunity for dialogue. Finally, POs can get a sense of how to start adjusting the backlog for the upcoming sprint.

If you found this helpful, check out these other project management tips.

Caktus Group: Basics of Django Rest Framework

What Is Django Rest Framework?

Django Rest Framework (DRF) is a library which works with standard Django models to build a flexible and powerful API for your project.

Basic Architecture

A DRF API is composed of 3 layers: the serializer, the viewset, and the router.

  • Serializer: converts the information stored in the database and defined by the Django models into a format which is more easily transmitted via an API
  • Viewset: defines the functions (read, create, update, delete) which will be available via the API
  • Router: defines the URLs which will provide access to each viewset

A graphic depicting the layers of a Django Rest Framework API.

Serializers

Django models intuitively represent data stored in your database, but an API will need to transmit information in a less complex structure. While your data will be represented as instances of your Model classes in your Python code, it needs to be translated into a format like JSON in order to be communicated over an API.

The DRF serializer handles this translation. When a user submits information (such as creating a new instance) through the API, the serializer takes the data, validates it, and converts it into something Django can slot into a Model instance. Similarly, when a user accesses information via the API the relevant instances are fed into the serializer, which parses them into a format that can easily be fed out as JSON to the user.

The most common form that a DRF serializer will take is one that is tied directly to a Django model:

  
from rest_framework import serializers
# assumes the Thing model is importable from your app

class ThingSerializer(serializers.ModelSerializer):
    class Meta:
        model = Thing
        fields = ('name', )
  

Setting fields allows you to specify exactly which fields are accessible using this serializer. Alternatively, exclude can be set instead of fields, which will include all of the model’s fields except those listed in exclude.

Serializers are an incredibly flexible and powerful component of DRF. While attaching a serializer to a model is the most common use, serializers can be used to make any kind of Python data structure available via the API according to defined parameters.
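As a quick sketch of that flexibility (this example isn’t from the post; the field names are made up), a plain serializers.Serializer can expose an ordinary Python dict:

from rest_framework import serializers

class StatsSerializer(serializers.Serializer):
    name = serializers.CharField()
    count = serializers.IntegerField()

# No model involved: any object or dict with matching keys will do.
data = StatsSerializer({'name': 'things', 'count': 42}).data
# data == {'name': 'things', 'count': 42}, ready to render as JSON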

ViewSets

A given serializer will parse information in both directions (reads and writes), but the ViewSet is where the available operations are defined. The most common ViewSet is the ModelViewSet, which has the following built-in operations:

  • Create an instance: create()
  • Retrieve/Read an instance: retrieve()
  • Update an instance (all fields or only selected fields): update() or partial_update()
  • Destroy/Delete an instance: destroy()
  • List instances (paginated by default): list()

Each of these associated functions can be overwritten if different behavior is desired, but the standard functionality works with minimal code, as follows:

  
from rest_framework import viewsets
# assumes the Thing model and ThingSerializer above are importable

class ThingViewSet(viewsets.ModelViewSet):
    queryset = Thing.objects.all()
    serializer_class = ThingSerializer
  

If you need more customization, you can use generic viewsets instead of the ModelViewSet or even individual custom views.
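As one minimal sketch of that kind of customization (assuming a hypothetical owner field on the Thing model above), the queryset behind list() and retrieve() can be swapped out by overriding get_queryset():

class OwnedThingViewSet(viewsets.ModelViewSet):
    serializer_class = ThingSerializer

    def get_queryset(self):
        # Limit every operation to the requesting user's own Things
        return Thing.objects.filter(owner=self.request.user)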

Routers

Finally, the router provides the surface layer of your API. To avoid creating endless “list”, “detail” and “edit” URLs, the DRF routers bundle all the URLs needed for a given viewset into one line per viewset, like so:

  
# Initialize the DRF router; only once per urls.py file
from rest_framework import routers

router = routers.DefaultRouter()

# Register the viewset
router.register(r'thing', main_api.ThingViewSet)
  

Then, all of the viewsets you registered with the router can be added to the usual urlpatterns:

urlpatterns += [url(r'^', include(router.urls))]  # url, include from django.conf.urls
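A client can then fetch the registered endpoint over plain HTTP; here’s a hypothetical example using the requests library (the host is an assumption, and the route comes from the registration above):

import requests

# List all Things via the API; assumes the project is served at example.com
# and the router is included at the site root as shown above.
response = requests.get('https://example.com/thing/')
print(response.json())  # the paginated list of Things, as parsed JSON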

And you’re up and running! Your API can now be accessed just like any of your other Django pages. Next, you’ll want to make sure people can find out how to use it.

Documentation

While all code benefits from good documentation, this is even more crucial for a public-facing API, since APIs can’t be browsed the same way a user interface can. Fortunately, DRF can use the logic of your API code to automatically generate an entire tree of API documentation, with just a single addition to your Django urlpatterns:

# include_docs_urls comes from rest_framework.documentation
url(r'^docs/', include_docs_urls(title='My API')),

Where next?

With just that simple code, you can add an API layer to an existing Django project. Leveraging the power of an API enables you to build great add-ons to your existing apps, or empowers your users to build their own niche functionality that exponentially increases the value of what you already provide. For more information about getting started with APIs and Django Rest Framework, check out this talk.

Caktus Group: Add Value To Your Django Project With An API

How do your users interact with your web app? Do you have users who are requesting new features? Are there more good feature requests than you have developer hours to build? Often, a small addition to your app can open the door to let users build features they want (within limits) without using more of your own developers’ time, and you can still keep control over how data can be accessed or changed. That small addition is called an application programming interface, or API. APIs are used across the web, but if you aren’t a developer, you may not have heard of them. They can be easily built on top of Django projects, though, and can provide great value to your own developers as well as to your users.

What Is An API?

At its core, an API is essentially an interface which allows two pieces of software to talk to each other. This usually refers to a request that reaches across the web to a third-party service, although it can also be used to allow two of your own apps to talk to each other.

Why Would I Want One?

As a user, there are many reasons you might want access to an app’s data. How often do you think “this would be great if they added just one other feature!”

We’d all like to think our apps address all our users’ needs, but there will always be a subset who have a corner-case use that they’d like to implement. If only a few dozen people would use that feature, but you have a lengthy backlog of other features that a more significant number of users would use, then you’re likely to prioritize the features that will help the most people.

With an API, that small subset can write (or hire someone to write) an add-on which gives them their niche feature. Multiply that by the dozens of small niche subsets of users who have different wishlists and you might have a bunch of users who would benefit from just one new feature: an API.

Is It Worth The Cost?

As with many software products, the value proposition depends on the amount of time that will be invested in building the feature, but an API doesn’t have to take much investment! As previously mentioned, an API can be easily layered on top of an existing Django project, so if you have Django apps, you may be closer than you think.

One of the greatest values an API can provide is that users may attach themselves to your product, making it an integral part of their operations. If they only use the features that are laid out on your website, then another company can come along and build a competing service that handles all of those functions plus some, or for a lower cost. On the other hand, if they use just 70% of the features you advertise but have integrated your service into their operations by using your API, then they would have to re-write those integrations to move to another service. Suddenly, that API is a really strong reason to stick with your service rather than hop to the newest player in the field.

Getting started

If you don't have an in-house development team to help with an API, the work can be contracted out to a web development company like Caktus. Contact us to start developing an API for your Django project.

Philip Semanchuk: A Python 2 to 3 Migration Guide

It’s not always obvious, but migrating from Python 2 to 3 doesn’t have to be an overwhelming effort spike. I’ve done Python 2-to-3 migration assessments with several organizations, and in each case we were able to turn the unknowns into a set of straightforward to-do lists.

I’ve written a Python 2-to-3 migration guide [PDF] to help others who want to make the leap but aren’t sure where to start, or have maybe already begun but would like another perspective. It outlines some high level steps for the migration and also contains some nitty-gritty technical details, so it’s useful for both those who will plan the migration and the technical staff that will actually perform it.

The (very brief) summary is that most of the work can be done in advance without sacrificing Python 2 compatibility. What’s more, you can divide the work into manageable chunks that you can tick off one by one as you have time to work on them. Last but not least, many of the changes are routine and mechanical (for example, changing the print statement to a function), and there are tools that do a lot of the work for you.
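For example, here’s what the print change looks like; this is a generic sketch, not an excerpt from the guide:

# The Python 2 statement `print "hello"` becomes a function call, and the
# __future__ import keeps the file working under Python 2.6+ and Python 3.
from __future__ import print_function

print("hello")
print("partial line", end="")  # keyword arguments replace trailing commas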

You can download the migration guide here [PDF]. Please feel free to share; it’s licensed under a Creative Commons Attribution-ShareAlike license.

Feedback is welcome, either via email or in the comments below.

Caktus GroupUX Research Methods 3: Evaluating What Is

In previous blog posts on UX research methods, I discussed techniques we use to understand how users think and feel, what they need and want, and why; and those we use to analyze and understand user behavior.

Another group of techniques frequently included in UX research methods does not involve a direct study of users, but rather an evaluation of the landscape and specific instances of existing user experience.

Competitive Landscape Review

Competitive landscape review is typically done as a qualitative (generative or evaluative) study of a small sample of direct and indirect competitors. Direct competitors are companies that offer the same, or a very similar, value proposition to the same customer segment that our client serves. Indirect competitors are companies that offer a similar value proposition to a different customer segment from that served by our client.

During a competitive landscape review, we look at three to five direct competitors and no more than three indirect competitors. For each competitor, we analyze:

  • Market positioning
  • How long they’ve been on the market
  • What the delivery method of their software is
  • Who their primary user segments are

We also look for reviews of the competitors’ products to better understand what their users like and what they don’t like. Finally, we create a feature matrix to compare key features across competitors’ products, and identify windows of opportunity for our client.

Content Audit

A content audit is a qualitative, evaluative research method that can be employed to better understand the current state of an existing application or website. It is most relevant for content-heavy marketing websites that need a redesign.

Content auditing is a process of creating and evaluating an inventory of all content and assets on a website, including recording content structure and relationships between content blocks. It may also include an analysis of the vocabulary used as part of the user interface in order to assess its quality and consistency. It is a great tool to employ ahead of a content modeling discovery workshop.

UX Review

A review of the user experience of an existing website or application is a qualitative, evaluative method that allows the reviewer(s) to analyze the current state through one of the following approaches.

Heuristic Evaluation

This type of review compares the current state of a website or an application to an established set of usability heuristics (best practices or rules-of-thumb), and identifies where the current state falls short in terms of its adherence to those heuristics.

The best-known and most widely used set of heuristics was developed by Rolf Molich and Jakob Nielsen and has become an industry standard. Because this type of review relies on an established standard, it can be performed by anyone who has access to that standard. It is also recommended that a heuristic evaluation be done by more than one reviewer.

Expert Review

An expert review does not have to rely strictly on a prescribed set of heuristics. The set of best practices an expert review references may be broader or narrower. Some websites or applications may require an approach that does not adhere to all standard heuristics.

For example, Nielsen’s heuristics stipulate “aesthetic and minimalist” design. While that guideline is considered best practice for many types of applications, it does not apply to all. In games, the experience relies heavily on a very rich (and certainly not minimalist) aesthetic. Because an expert review affords more flexibility than a heuristic evaluation, it should be performed by someone with expertise in UX best practices. It can be done by a single expert.

Selecting UX Research Methods for a Project

Evaluating the current user experience is done at the onset of a project. For a new project, or a redesign of an existing project, conducting a competitive landscape review can provide insights into user experience solutions already on the market and opportunities for innovation. A redesign project will benefit from taking stock of what is:

  • What are the structure and components of the existing content?
  • Does the design and experience of the current website adhere to established heuristics (rules-of-thumb)?
  • What best practices are not currently implemented but should be in the redesigned application?

As is the case with other UX research methods, not all techniques listed in this group have to be employed on a single project.

If you’re not sure what UX research will benefit your project most, get in touch. We can help.

Caktus GroupUX Research Methods 2: Analyzing Behavior

Previously, I explained interviews, surveys, and card sorting as techniques that help UX researchers understand how users think and feel, what they need and want, and why. In this post, I will review UX research methods best suited to understand user behavior and its causes.

As mentioned before, there exist many UX research methods, but not all of them have to be employed on any given project. The exact selection of techniques depends on the specific needs of a project, its budget, and timeline.

Usability Testing

By usability testing, we specifically mean an evaluative, behavioral research method that consists of observing users (directly or indirectly) while they complete specific tasks on a website or within an application. At Caktus, we conduct qualitative usability testing during which we observe the user’s interactions with a website or an application.

It’s worth noting that usability testing can be undertaken with different goals in mind:

  • As a formative study to evaluate the current state of usability of a website, ahead of a redesign.
  • As a summative study to evaluate the final state of a feature or a website at the end of a project (or a development cycle).
  • As a formative assessment of a competitor's website or application to understand what usability problems exist and should be avoided.

Moderated usability testing

Moderated usability testing is a study moderated by a Caktus UX designer. It can be done in person on-site, or remotely by leveraging a third-party platform that allows us to connect with the user over the internet, have them share their screen, observe as they complete the tasks they’re presented with, and record the entire session. The platform also allows other observers to join in remotely, which is a great way for client stakeholders to gain direct insight into their product.

Unmoderated usability testing

Unmoderated usability testing is conducted with the help of a third-party platform that allows us to create tasks, deliver them to the user along with a link to the website or application under evaluation, and record the session during which the user is completing the tasks. We can then evaluate the recording and analyze the findings in order to issue recommendations.

On-site observation

On-site observation is a qualitative study that can result in behavioral or attitudinal insights. When done as generative research, it consists of observing users during their daily work routines in order to better understand how they work, what their needs and pain points are, etc. When conducted as evaluative research, it means observing users completing tasks within an application in order to identify usability problems. The latter may seem similar to usability testing. There is, however, an important difference between the two approaches.

In usability testing, the participants are novice application users (users who have not used the application before) and the researcher provides them with tasks that imitate real-world scenarios. In an on-site observation, the researcher observes people who use the application in their work. Users walk the researcher through their workflows in the application, pointing out what’s working and what’s not working. The researcher gains insights that are not only behavioral (representing what users do while interacting with the application), but also attitudinal (representing what people think and say, what their opinions are).

Treejack testing

Treejack testing is a qualitative or quantitative (depending on the participant sample size), evaluative method that allows us to assess how well information architecture and/or a navigation design pattern aligns with the users’ mental model. It consists of asking users to find labels representing content items within a tree-like model of information architecture or the navigation. At Caktus, we conduct treejack testing with the help of a third-party service. It allows us to measure not only the success and failure rates, but also to see the path a user takes to locate each content item.

First-click testing

First-click testing is typically a quantitative, evaluative, behavioral method in which users are presented with static images of an interface (either screenshots or high-fidelity mockups) and asked to complete tasks by clicking on what they interpret as interactive elements of the interface, e.g., links or buttons. The premise of this approach is founded in a 2009 study (3), which showed that a user’s first click is a good indicator of successful task completion. In other words, if the user’s first click is correct, they’re more likely to find what they’re looking for than if it is incorrect. When done with a large sample of participants, the results of first-click testing are a good predictor of the usability of the UI elements being tested.

At Caktus, we have used first-click testing as a qualitative method in an iterative series of tests that include card sorting, treejack testing, and first-click testing. In this approach we employ first-click testing in a way similar to treejack testing, as a method to assess the efficacy of a design that resulted from card sorting. We leverage a third-party platform to perform first-click testing.

Analytics Review

Analytics review is a quantitative, behavioral, evaluative research method. We use it to supplement the qualitative research we do. While a source of valuable data, analytics on its own does not necessarily deliver answers to questions about the quality of user experience or about usability. In combination with qualitative methods, however, it can enhance the process of diagnosing existing problems and improving user experience.

Analytics review consists of reviewing a set of metrics that an application’s or website’s analytics tool captures, e.g.

  • paths users take to reach certain content, sources of incoming traffic;
  • keywords used to find the content of interest;
  • events (or user interactions) on a page e.g., clicks, downloads, etc.;
  • conversion rates;
  • time spent on a page;

and more. In addition, reviewing a website’s search logs can be an insightful source of information about content users frequently look for or are not finding by means of the website’s main navigation.

Selecting UX Research Methods for a Project

The research methods we employ to analyze and understand user behavior can be helpful at any stage of a project.

We may begin a redesign project with:

  • Analytics review to gain insights about user behaviors on the current website or in an application
  • Usability testing of the current website to uncover existing usability problems
  • Competitive usability testing to reveal which digital experiences work well and which do not
  • On-site observations of users with or without the technology the project is concerned with

We may test initial designs for the project by conducting:

  • Treejack testing
  • First-click testing
  • Usability testing

And we monitor the usability of the implementation by conducting moderated or unmoderated usability testing.

Resources

For further reading, I suggest the following:

  1. UX Research Cheat Sheet, Susan Farrell, Nielsen Norman Group
  2. When to Use Which User-Experience Research Methods, Christian Rohrer, Nielsen Norman Group
  3. Bailey R.W., Wolfson C.A., Nall J., Koyani S. (2009) Performance-Based Usability Testing: Metrics That Have the Greatest Impact for Improving a System’s Usability. In: Kurosu M. (eds) Human Centered Design. HCD 2009. Lecture Notes in Computer Science, vol 5619. Springer, Berlin, Heidelberg

Caktus GroupUX Research Methods 1: Understanding Thought Processes, Motivations, and Needs

In a previous blog post, Types of UX Research, I discussed how UX research can be classified. I explained qualitative and quantitative, generative and evaluative, formative and summative, and attitudinal and behavioral types of research. Within each of these categories of research, there are several methods that can be used to reach specific project objectives.

It is good to have a range of research methods at one’s disposal, but it’s not necessary to use them all. Particular project needs, the project budget, and the project timeline are all factors that must be taken into account when deciding on which methods to use. Below I discuss specific techniques we use at Caktus to understand users’ thought processes, motivations, and needs.

Interviews

Interviews are a qualitative, attitudinal, generative research method typically used at the onset of a project. They are a great way to gather information ahead of a discovery workshop. They can also be conducted after a discovery workshop to help fill in knowledge gaps discovered during the workshop.

User Interviews

We talk to users to gain insights about who they are; what needs, wants, and pain points they have; in what contexts they operate; what their mental models are, etc. User interviews help us understand the user goals and outcomes that the application we are building must support, and are a basis for developing personas that guide the design and development process. Recruitment of participants for user interviews is done with the help of the client or through a third-party recruiting service that allows us to screen potential participants and select a well-matched target group.

Stakeholder Interviews

While understanding user needs and goals is paramount to requirements gathering, understanding business goals is equally important. Business goals should encompass user goals, but they are voiced from the perspective of the business. We learn about business goals, as well as the client’s perspective on user needs and pain points, by talking to client stakeholders.

Surveys

Surveys are primarily used as a quantitative research method for generative or evaluative purposes. They allow us to collect information from larger groups of respondents and generally result in numeric data. They can also be administered to collect qualitative data through open-ended questions. When used as a generative tool, a survey can inform a discovery workshop or be used to fill in knowledge gaps after the workshop. When used as an evaluative tool, a survey can be administered as formative research to evaluate an initial state of an application, or as summative research to assess the final or near-final state of an application.

Card sorting

Card sorting is a qualitative or quantitative (depending on the participant sample size), generative method often used to refine the information architecture of an application or website and to gather insights on which to base navigation design. In this type of study, participants are asked to group items (cards) representing the website’s content into categories that make sense to them. If names of the categories are provided by the researcher, the approach is called closed card sorting. If users are asked not only to categorize items, but also to create and name their own categories, the approach is called open card sorting. A mixed approach (with some categories pre-determined by the researcher, and some left to the participants to create) is called hybrid card sorting. At Caktus, we conduct remote card sorting studies via a third-party platform.

Selecting UX Research Methods for a Project

Interviews, surveys, and card sorting are all methods particularly useful at the onset of a project, although they could also be employed at later stages if clarification of requirements is needed. They help us understand how users think and feel, what they need and want, and why. Based on that understanding, we are better prepared to design a solution that delivers value for the target user segment.

At Caktus, we tailor the selection of research methods to a project’s objectives. If understanding users’ needs in quantitative terms is necessary (for example, when it is paramount to have confidence that a majority of users display a particular preference or need), a survey is a great tool. If we want to understand why users display a particular preference or need, or how they think about their day-to-day tasks, interviews are the technique of choice. And to understand how users categorize content that they seek or interact with, we conduct card sorting. On any project, the best results are obtained with a combination of UX research methods.

Have a project in mind? We can help you decide where to start and what UX research methods to leverage to give your project the best possible starting point.

More Resources

  1. “UX Research Cheat Sheet”, Susan Farrell, Nielsen Norman Group
  2. “When to Use Which User-Experience Research Methods”, Christian Rohrer, Nielsen Norman Group
  3. “Complete Beginner’s Guide to UX Research”, UX Booth
  4. “7 Great, Tried and Tested UX Research Techniques”, Interaction Design Foundation

Caktus GroupTypes of UX Research

Requirements gathering (or product discovery) is a part of every development project. We must know what to build before we build it, and we must refine our understanding of what we are building as we move along. Discovery workshops are a format well-suited for certain types of projects before development begins, although requirements gathering continues throughout a development project.

Whether conducted at the onset of a project or throughout the development effort, product discovery must be informed by insights and data.

This is the first of four blog posts devoted to conducting research in the context of user-centered design and development. In this post, I will look at the reasons for doing research and the types of research at our disposal. In the next blog posts, I will present and explain the specific user experience (UX) research methods we favor at Caktus.

Reasons for Doing Research

In user-centered application design and development, research is done in order to:

  • Learn who the users are, what they do, how they work, how they feel and think.
  • Describe context(s) in which users operate with and without the technology we’re building.
  • Understand user goals, needs, wants, and pain points.
  • Understand user mental models.
  • Learn how users accomplish tasks in the context of an application as well as independently of any technology.
  • Find out what experiences competitors are building and how those experiences work for users.
  • Gather information necessary to define information architecture and content structure.
  • Test assumptions made about the users, their contexts, and their interactions with the application we’re building.
  • Identify where the application fails to support user outcomes or what needs to be done to support them.
  • Analyze usage patterns of an existing application.
  • Analyze users’ behavioral patterns with regard to the technology under consideration.

Because of its emphasis on users, we call this type of research UX research.

Types of UX Research

UX research can be classified in a variety of ways. It’s helpful to be familiar with these classifications in order to understand what type of research can be applied when and for what purpose.

Quantitative vs. Qualitative Research

The classification of research into quantitative and qualitative is based on the type of methodology involved.

Quantitative research is used to measure user behavior and helps answer the what, how much, and how many types of questions:

  • How many pages does a user navigate to during a visit?
  • With what frequency are users accessing the application on certain devices?
  • How many new and how many returning visitors does the application have in a given time period?
  • How much time do users spend on a given page?
  • What is the distribution of keywords that users search for?
  • How many searches for a given keyword have been run in a period of time?
  • How many conversions occur on version A of the page, and how many on version B?

When done with a large enough sample of participants, quantitative research can deliver statistically significant results.

Qualitative research is done to describe user behavior and can be conducted with smaller samples of users. It results in descriptive outcomes that help understand the nuances of user contexts, behaviors, and interactions with technology. It seeks to understand the why of users’ actions:

  • Why are users spending more time on this page than on the other page?
  • Why are users converting better on version B of the page?
  • Why do people fail to complete a task?
  • Why are users frustrated by this feature?
  • Why do people need that feature?
  • Why do users have trouble understanding how to use the application?

While many people favor quantitative research, it is worth noting that some insights can only be found through qualitative research.

Quantitative and qualitative research work best when done in tandem. Both types of research can be employed at the onset of and throughout a project.

Generative vs. Evaluative Research

The classification of research into generative and evaluative is based on the intention with which research is conducted.

Generative research is done to generate information about the users and ways in which they operate. It involves learning about who the users are, what they do, how they do it, why they do what they do in a particular way, what frustrates them, what makes them happy, in what contexts they take an action, etc.

Generative research helps define the problem under consideration. The bulk of generative research is done at the beginning of a project, but it can continue at a smaller scale throughout the project if the problem requires further clarification.

Evaluative research is done to assess something that exists, e.g., a design or an application. The types of questions that evaluative research can help answer include:

  • Is the design solving the problem for users?
  • How is the application performing?
  • Can users complete tasks easily?
  • Which features are a source of frustration?
  • Where and when are users unable to complete tasks correctly, and why?
  • What works great, what does not, and why?

Evaluative research can be conducted at any time throughout the project as long as there is something to evaluate. Early sketches, paper or digital prototypes, and implemented interfaces can all be subject to evaluative research.

Quantitative, qualitative, or a combination of these methods can be used in either generative or evaluative research.

Formative vs. Summative Research

Formative and summative research are types of evaluative research. The difference between them lies in when in a project they are conducted and for what purpose.

Formative research is typically done at the onset of a project or development cycle to assess the current state of a feature, a website, or an application. It helps identify problems to be solved (for example, pain points the users experience when interacting with an application).

Summative research is a process of evaluating the final or near-final state of a feature, a website, or an application at the end of a project or development cycle. It helps evaluate whether a design, feature, or application/website meets the user goals. If a project or development cycle started with formative research, the results of summative research can be compared to those of the formative research in order to measure success or progress.

Quantitative, qualitative, or a combination of methods can be used in either formative or summative research.

Attitudinal vs. Behavioral Research

Attitudinal and behavioral research derive their classification from the nature of the information obtained.

Attitudinal research is about what people say. By learning what people say, we gain insight into what they think, feel, and want.

On the other hand, in behavioral research we watch what people do. By watching user actions, we can determine what they need to reach the desired outcomes, catch a glimpse of the mental models they bring into their interactions with technology, and understand what needs to be done to align the technology with users’ mental models.

Users are people, and people are not fully self-aware. Unconscious mental processes occur faster than conscious ones and as a result, people may make decisions and choices without fully knowing why. For that reason, simply listening to what people say (as we do in attitudinal research) may not be sufficient to understand requirements thoroughly. Watching users complete tasks is often necessary to understand what they need and expect from technology we’re building.

Coming Up Next: UX Research Methods

It is helpful to understand the various types of UX research available to us to fully appreciate the value of research in user-centered application design and development. In the next blog post, I will discuss the specific UX research methods we use at Caktus to inform requirements gathering for the projects we build.

Caktus GroupQuick Tips: How to Find Your Project ID in JIRA Cloud

Have you ever created a filter in JIRA full of project names and returned to edit it, only to find all the project names replaced by five-digit numbers with no context? The trial and error approach (deleting and restoring numbers one by one until the project you wanted to remove no longer appears in the filter results) is painful. So, how do you find the ID for a project?

Previous version of JIRA

Step 1. As an admin user, select the gear to open the admin dropdown and select Projects under JIRA Administration.

Step 2. Select your project from the list.

Step 3. Once on the project summary page, select Details on the left.

Step 4. The project ID appears at the end of the URL.

New JIRA experience

Step 1. As an admin user, select Projects from the left nav.

Step 2. Select your project from the list.

Step 3. Once on the project page, select Settings at the bottom of the project nav.

Step 4. The project ID appears at the end of the URL.

Happy filtering! For more JIRA tips, check out our previous post on how to change your name in JIRA.

Philip SemanchukSetuptools Surprise

Summary

I recently tripped over my reliance on a simple (and probably obscure) feature in Python’s distutils that setuptools doesn’t support. The result was that I created a tarball for my posix_ipc module that lacked critical files. By chance, I noticed when uploading the new tarball that it was about 75% smaller than the previous version. That’s a red flag!

Fortunately, the bad tarball was only on PyPI for about 3 minutes before I noticed the problem and removed the release.

I made debugging harder on myself by stepping away from the project for a long time and forgetting what changes I’d made since the previous release.

Background

In February 2014, I (finally) made my distribution PyPI-friendly. Prior to that I’d built my distribution tarballs with a custom script that explicitly listed each file to be included in the tarball. The typical, modern, and PyPI-friendly way to build tarballs is by writing a MANIFEST.in file that a distribution tool (like Python’s distutils) interprets into a MANIFEST file. A command like `python setup.py sdist` reads the manifest and builds the tarball.

That’s the method to which I switched in February 2014, with one exception—since my custom script already contained an explicit list of files, it was easier to write a MANIFEST file directly and skip the intermediate MANIFEST.in. That works fine with distutils.

I released version 1.0.0 of posix_ipc in March of 2015, and haven’t needed to make any changes to the code until just now (the beginning of 2018). However, in February 2016, I made a small change to setup.py that I thought was harmless. (Ha!)

I added a conditional import of setuptools so that I could build wheels. (Side note: I really like wheels!) The change allows me to build posix_ipc wheels on my laptop, where I can ensure setuptools is available, but otherwise falls back on Python’s distutils, which works just fine for everything else I need setup.py to do, including installing from a tarball. The code looks like this —

try:
    import setuptools as distutools
except ImportError:
    import distutils.core as distutools
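
For reference, with setuptools importable, building the two artifacts is one command each; the bdist_wheel command needs setuptools plus the separate wheel package installed:

python setup.py sdist        # source tarball; works with distutils or setuptools
python setup.py bdist_wheel  # wheel; requires setuptools and the wheel package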

The Problem

Just a few days ago, I made a maintenance release of posix_ipc, and it was then that I noticed that the tarballs I built with my usual `python setup.py sdist` command were 75% smaller and missing several critical files. Because it had been 23 months since I made my “harmless” change to setup.py, the switch from distutils to setuptools wasn’t exactly fresh in my mind.

However, some examination of my commit log and the realization that this was the first release I’d made after that change gave me a suspicion, and grepping through setuptools’ code revealed no references to MANIFEST, only MANIFEST.in.

There’s also this in the setuptools documentation, if I’d bothered to read it—

[B]e sure to ignore any part of the distutils documentation that deals with MANIFEST or how it’s generated from MANIFEST.in; setuptools shields you from these issues and doesn’t work the same way in any case. Unlike the distutils, setuptools regenerates the source distribution manifest file every time you build a source distribution, and it builds it inside the project’s .egg-info directory, out of the way of your main project directory.

So that was the problem—setuptools doesn’t look for a MANIFEST file, only MANIFEST.in. Since I had the former but not the latter, setuptools used its defaults instead of my list of files in MANIFEST.

The Solution

This part was easy. I converted my MANIFEST file to a MANIFEST.in which works with both setuptools and distutils. That’s probably a more robust solution than the hardcoded list in MANIFEST anyway.
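
For anyone making the same conversion: a MANIFEST.in contains commands rather than a literal file list, which is what makes it more robust as files come and go. A minimal sketch (these file names are illustrative, not posix_ipc’s actual contents) might look like this:

# MANIFEST.in is read by both distutils and setuptools
include README LICENSE VERSION
include *.py *.c *.h
recursive-include demo *.py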

I’m pleased that posix_ipc has been stable and well-behaved for such a long time, but these long breaks between releases mean a certain amount of mental rust has always accumulated when it’s time for the next one.

By the way, the source for posix_ipc is now hosted on GitHub: https://github.com/osvenskan/posix_ipc

Caktus GroupCulture of Unit Testing

Unit testing is something that deeply divides programmer communities. Nearly everyone agrees that it’s good to have unit tests in place, but some developers question whether the time invested in writing unit tests would be better spent writing “real” code, doing manual QA, or debugging.

In practice, it is a good use of time and should be standard in any company that takes pride in its end product.

Real-world examples

On one project, we self-enforced a requirement that at least 90% of our code be covered by unit tests at any given time. We automated this so that, if coverage drops below that level, the code won’t be merged into the main codebase until enough tests have been written to bring it back up. This ensures that tests are written as code is written, avoiding the monstrous task of writing tests for an already-massive codebase that has no tests yet.

There have been times when we have been on the verge of not finishing a task within the time we had planned and tests haven’t been written for that code yet. It’s extremely tempting in that situation to skip test-writing. In a company that values deadlines over quality, such tests would likely be skipped, but we’ve made a different choice at Caktus. I think it’s the right one.

At least a couple times a month I find myself writing tests for the code I’ve just written and realizing that I had omitted a check for an edge case. These usually take little time to fix. Writing tests can also help me think about how the code should be structured, particularly encouraging me to make it more modular. Not only does that increase readability, but it also can make it easier to update later as requirements change.

When I think about tests, I automatically go straight to the edge cases. A manual QA process may or may not catch problems with rare or unusual inputs and it can take a lot of time to manually test numerous edge cases. But having written an automated test, I ensure that the edge case continues to be handled according to the client’s specifications.
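
As an illustration, here is a minimal sketch of the kind of edge-case test I mean; the discount function and its rules are hypothetical:

import unittest


def apply_discount(price, percent):
    """Return price reduced by percent; a hypothetical example function."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100.0


class ApplyDiscountTests(unittest.TestCase):
    def test_typical_case(self):
        self.assertEqual(apply_discount(200, 25), 150)

    def test_edge_cases(self):
        # 0% and 100% are exactly the inputs manual QA tends to skip
        self.assertEqual(apply_discount(200, 0), 200)
        self.assertEqual(apply_discount(200, 100), 0)
        with self.assertRaises(ValueError):
            apply_discount(200, 101)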

These same unit tests made a large refactoring process much easier. Going into the process, I knew that the change I was making would require compensatory changes in dozens of other places in the code, and I’m sure I would have eventually located all of the places it needed to change anyway. But, since we already had thorough test coverage, I was able to make the initial change, run the test suite, and use the test failures to know where I needed to make changes in the existing code. I also knew when I was done because all the tests were passing again. One final scan through the code confirmed that I hadn’t missed anything, and subsequent real-world tests have confirmed that everything seems to be working fine. Because of the attention to tests throughout the process, the client could be assured of a consistently high-quality product with very few bugs in less time than it would take without the tests.

Establishing a culture of testing

The first step in establishing testing as a standard part of the coding process is simply to measure it. Plenty of tools are available to measure your testing and report on what’s being covered and what isn’t. The best starting point is coverage, which will tell you how much of your code is executed by your existing tests. As Caktus chose to do in the example above, a minimum coverage level can be set that must always be maintained; this works best when implemented at the start of a project and adhered to consistently.
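
One way to wire that up for a Django project is to run the suite under coverage and let the report command fail the build when the total dips below the agreed threshold (the 90% figure mirrors the example above):

coverage run manage.py test       # run the test suite under coverage
coverage report --fail-under=90   # exit non-zero if total coverage is below 90%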

If you are trying to add testing to existing code, the same principle can be applied with some minor tweaks. Unless you have the luxury of putting a hold on new code while tests are written (unlikely!), you will probably need to add tests gradually. The most reasonable way to do this is either to set goals for coverage or to impose a requirement that coverage must always go up (until it reaches a reasonably high level).

Regardless of the application, if unit tests are consistently expected, the team will get faster and better at implementing them.

Code maintenance

It’s often asserted that a test suite is simply more code to maintain. While technically true, tests, once written, should only need to change if the requirements also change. This means that the tests should not need to be tweaked constantly. When they do need to be tweaked, that also helps streamline the process of finding the code that needs to change. Most of the time, the tests will sit untouched and do their job, asserting that all of the code is working as expected, with no maintenance required. When a test needs to be changed, it is again doing its job, pointing to code that is involved in changing requirements. No test should be changed just because it fails. A failing test tells you that either the requirements (and therefore code) changed, or that the test was not written correctly in the first place.

False sense of security?

One drawback of unit tests is that they can make you feel like everything is working great, and reduce motivation to do real-world testing. While unit tests make a great first pass over the code, there is no substitute for genuine QA. The tests should make the QA process go faster, as some of the more obvious bugs will be found before any manual testing happens, but QA will always still be needed. Even if a codebase has 100% coverage, there’s no guarantee that something hasn’t been missed. A bug in a test can easily disguise a bug in the code.

Reflections on testing

It took a not-insignificant amount of time for me to get the hang of writing unit tests when I was new to the concept, but my learning time has been more than made up for by the time those same tests have saved me. Testing is now second-nature to me, and I can write unit tests in no time when I am testing code I’ve just written. It only takes a few extra minutes and it so often catches errors or assists in later coding that I can’t imagine not taking the time to write tests from the beginning.

Certainly, tests need to be fairly comprehensive in order to gain all these benefits, but even a small test suite can be helpful and test coverage can be increased bit by bit if tests are written with every new pull request. We have made concerted efforts to establish test coverage on existing, untested code before, and that’s great if you have the time. If not, though, just remember that some is better than none, and increasing is better than stagnating.

Next steps

If you want to work on increasing emphasis on tests in your own projects, here are some strategies to think about:

  • Practice writing tests for every bug fix or new feature (better yet, before starting on them!)
  • Get in the habit of running test suites frequently
  • Implement a policy that every pull request should include a test for the feature or bug being worked on
  • Implement a policy that code coverage should not go down on any pull request
  • Run mutation testing to find places where coverage is fine, but results of the executed code are not actually being tested

Ready to get started? Read more about testing and code quality.

Caktus GroupCaktus Blog Best of 2017

With 2017 now over, we highlight the top 17 posts published or updated on the Caktus blog this year. Have you read them all?

  1. Using Amazon S3 to Store your Django Site’s Static and Media Files: Our most popular blog post was updated in September 2017 with new information. Learn how to use Amazon S3 to serve static and media files and improve site performance.
  2. A Production-ready Dockerfile for Your Python/Django App: Docker provides a solid way to containerize an app. This blog post includes a Dockerfile ready for your use plus instructions on how to use it in a project.
  3. Python Type Annotations: Type annotation support in Python helps developers avoid errors. Read this post for a quick overview on how to use them.
  4. Digging Into Django QuerySets: Learn how to use the Django shell with an example app to perform queries.
  5. Hosting Django Sites on Amazon Elastic Beanstalk: We use AWS Elastic Beanstalk for deploys and autoscaling. This post introduces the basics and how to use it with Python.
  6. SubTests are the Best: Good tests are important to good code, but what makes a good test? Three factors are detailed in this post, which was also presented as a talk at PyOhio 2017 and can be watched on YouTube.
  7. Writing Unit Tests for Django Migrations: Another all-time top blog post which received an update this year, with a walkthrough demonstrating how to write thorough tests for multiple versions of Django.
  8. Managing Your AWS Container Infrastructure with Python: Introducing CloudFormation and Troposphere as tools to host and manage Python apps on AWS.
  9. New Year, New Python: Python 3.6: Highlights from the Python 3.6 release, including secrets, new string interpolation methods, variable type annotations, and more.
  10. Advanced Django File Handling: Customize Django’s file handlers for more flexibility. This post shows you how.
  11. 5 Ways to Deploy Your Python Web App in 2017: Part of our PyCon 2017 Must See Series, this summary also includes the video of the talk at PyCon. Take a look at a live app deployment with ngrok, Heroku, AWS Lambda, Google Cloud Platform, and Docker.
  12. Python Tool Review: Using PyCharm for Python Development - and More: One of our developers reviews the PyCharm IDE for Python. Learn more about how it’s used at Caktus in this interview with our developers (from JetBrains).
  13. Opening External Links: Same Tab or New?: An exploration of the debate around how external links should open, with perspectives from marketing, UX, web development, and users.
  14. Building a Custom Block Template Tag: A walkthrough of how to build a block tag, with references to relevant Django documentation.
  15. 3 Reasons to Upgrade to the Latest Version of Django: For business stakeholders new to website development, we offer three reasons why upgrading the technology behind the site should be considered a necessity.
  16. From User Story Mapping to High-Level Release Plan: The user story map created as part of a discovery workshop is an excellent tool to use in writing the first release plan for a development project. Find out why in this post.
  17. How to Make a jQuery: Recreate the most helpful parts of jQuery to learn how to develop without it.

Going into 2018

What were your favorite posts? What topics did you find most interesting or helpful? What are you hoping to learn about in 2018? Let us know in the comments or on Twitter what you’d like to see more of in the coming year.

Caktus GroupSouthern Fried Agile 2017 Recap

I attended the Southern Fried Agile conference in November 2017, where I heard some excellent talks and connected with local Agilists in Charlotte, NC. Southern Fried Agile is the sister conference of TriAgile, which I also attended this year.

The keynote address by Rich Sheridan, CEO of Menlo Innovations and author of Joy, Inc., set the tone for the day. He inspired the audience by describing the Agile culture and mindset of his company. I took away some innovative ideas from this talk, including: rigorous pair programming that rotates partners every week; demos where the customer uses the software that was built while the team observes and gathers feedback; a culture of minimal meetings that makes use of the open space for constant communication, effectively reducing the need for meetings; and stakeholder prioritization techniques that make use of physical size of pieces of paper to represent level of effort. The picture he painted of the company culture was both memorable and aspirational, and I hope to see more of these examples in the future of Agile.

The most interesting talk I heard was by Sally Elatta, president of Agile Transformation Inc., on "Scaling Agile Metrics and Measuring What Matters." Her presentation emphasized that agility starts at the top of an organization. An Agile transformation that is dictated rather than demonstrated will suffocate teams. A healthier culture is produced when company leadership sets the example and participates in agility. This resonated with me and helped me understand how Agile concepts and techniques can be applied outside of development teams. The talk focused on a system of metrics for Agile measurement at the team, program, and business levels, which I look forward to trying!

Another enlightening talk was "Overcoming Resistance - How to Engage Developers in Agile Adoption" by David Frink from Ipreo. He outlined reasons that developers may not feel engaged with Agile, as well as signs of non-engagement. Using the elephant and rider metaphor (where the elephant represents a person’s emotions, passion, fear and the rider represents logic, analysis, planning), the talk provided ways to motivate both the elephant and the rider. He also explained why it's essential to address the two together. Some methods are:

  • Putting the developers in touch with their users with tools like usability studies, to build a sense of empathy
  • Giving them goals and challenges instead of predetermined solutions, so they can use their creativity to produce the best solutions
  • Protecting their focused time to let them maximize flow (time “in the zone”)
  • Uncovering resistance with techniques like Fist of Five
  • Giving positive feedback to reinforce and build upon Agile behaviors

I also heard Rob English from CapitalOne talk about "Leading a Scrum Master Evolution," making a strong case for Scrum Masters to move in a more technical direction and build more domain knowledge; "Gain Organizational Efficiencies with Kanban" by Yvonne Kish, outlining the benefits of Kanban throughout multiple areas of an organization (delivery, portfolio, and business levels); "Minimum Viable Process" by Nick Smith from Fidelity, describing his team's Scrum culture; and finally "Motley Crews: Lives & Deaths of Cinematic Teams" by James Collins from Wells Fargo, featuring movie clips about teams and their evolution.

The larger themes from this year’s conference were a renewed emphasis on building and supporting autonomous teams, minimizing process to be as lightweight as possible, and a focus on using empirical data to inspect and adapt at multiple levels. Events like this help bring me back to the spirit of Agile when I get too bogged down in the day-to-day. They are also an excellent way to network and hear new ideas! The conference delivered high value for an affordable registration fee and I would recommend it to anyone working in development in or around North Carolina.

Caktus GroupYear-End Charitable Giving 2017

Twice a year we solicit proposals from the team for contributions to non-profit organizations in which individual Cakti are involved or that have impacted their lives. Our charitable giving program is a chance to support not only our own employees but the wider community. This quarter we are pleased to donate to the following organizations.

St. John Rescue and Unidos Por Puerto Rico

Hurricane relief was at the forefront of our employees’ minds this season. Though Hurricanes Maria and Irma hit several months ago, the inhabitants of these U.S. territories are still struggling to recover from the storms’ devastating effects.

St. John Rescue provides emergency rescue and medical support along with equipment and supplies. They formed in 1995 with the goal of providing improved response services on the island and have been crucial in providing storm relief and emergency assistance.

Unidos Por Puerto Rico is a new initiative formed, organized, and administered by and for Puerto Ricans to provide direct aid in the wake of the year’s storms. One hundred percent of the organization’s proceeds go to helping victims affected by these natural disasters.

Triangle, NC Organizations

Note in the Pocket provides clothing to children identified by various schools and social service agencies as impoverished or homeless and in need of clothing to wear to school.

InterAct works to end domestic and sexual violence in Wake County. They provide a 24-hour crisis line, community outreach programs, court advocacy, an emergency shelter, individual and group counseling, sexual assault services, and youth education and prevention services.

Code the Dream seeks to build a gateway to the tech sector for minority and immigrant youth by offering free coding programs and classes. They also offer a unique chance for their students to gain real world experience by partnering with local businesses and organizations to work on professional projects serving community needs.

Alley Cats and Angels is an all-volunteer, foster home-based, cat rescue dedicated to helping stray, abandoned, and feral cats. Ultimately, this organization seeks to reduce the overall number of homeless cats in the Triangle through their adoption, barn cat, and spay/neuter assistance programs. Foster litters from Alley Cats and Angels regularly come to the Caktus office for socialization and several Cakti have ended up adopting kittens they met through this program!

Supporting the Arts

WCPE Radio the Classical Station is a non-commercial, independent, listener-supported station dedicated to excellence in classical music broadcasting. In addition, they provide grants supporting classical music education in North Carolina.

The Carrack empowers local artists by providing professional exhibit and performance opportunities in a volunteer-run, zero-commission space located in downtown Durham, North Carolina. They have been essential to the movement for a rejuvenated arts scene in Durham, especially through their efforts to support emerging, experimental, and/or minority artists as well as hosting and funding inclusive events and projects.

Looking Forward

We have administered our Charitable Giving Program since 2014, but it feels especially meaningful around the holidays, encouraging us to look forward at how we might make a difference in the new year. The program also allows us another opportunity to practice and live our values of fostering empathy and supporting our community.

Caktus GroupSupercharging your CSS with Stylus and PostCSS

Here at Caktus the front-end team stays on the bleeding edge by taking advantage of the latest and greatest tools. We only incorporate features into our packaging that are well-supported and production-ready, as well as those that meet our list of standard browser requirements. Luckily, there are plenty of tools that allow us to use experimental technologies with appropriate fallbacks for non-supported browsers.

Getting Started

Our front-end packaging includes npm and gulp to bundle CSS files differently based on our working environments. It is a good idea to separate local development and production environment pipelines in order to optimize each environment. In our package.json file, we use two scripts: dev and build.

"scripts": {
   "build": "./node_modules/.bin/gulp deploy",
   "dev": "./node_modules/.bin/gulp"
},

Dev is used when the project is run in a local development environment. We use tools like sourcemapping, watchers to track when specified files have changed, and livereload to auto-refresh browsers when specific triggers are detected.

Our build script is used for staging and production environments. It is set up to concatenate and minify source files into one CSS file that gets served to the client. Both scripts do a fair amount of preprocessing and postprocessing of our style files and allow us to use some powerful features we would not normally be able to access. I will spend the bulk of this post outlining these features and why they are useful to implement in your next project.

Ways to use Stylus

At Caktus we use Stylus as our CSS preprocessor of choice. It has many of the same features as LESS and SASS; however, the added benefit of Stylus comes from its flexible syntax, ability to run functions, and out-of-the-box custom selectors.

In Stylus, you can structure your style files with more syntactic freedom than other CSS preprocessors. For example, if you prefer a more simplistic approach to writing style rules, you can do so:

body
    font 1rem Helvetica Neue, sans-serif
    margin 0
    padding 0

If you prefer the regular CSS syntax, Stylus supports it. Or, if you prefer any variation in between, Stylus also supports that. With flexible syntax, team members can now determine how to write CSS styles and patterns that work for the team as a whole - which has proven to be helpful for team members who do not come from a front-end background. More importantly, flexible syntax allows us to structure our CSS to be less noisy, which improves clarity and comprehension.

Stylus also gives us the ability to define and call custom functions. I find this feature particularly useful when computing values that should not be static but rather relative to other values. In the simple example below, we set the margin of each element based on a formula relative to the element’s position within its container.

<section>
    <div></div>
    <div></div>
    <div></div>
    <div></div>
</section>

count = 4

divideByHalf(start, end, val)
    if start > end
        return val
    else
        return divideByHalf(start + 1, end, val / 2)

section
    for num in (1..count)
        *:nth-child({num})
            margin: divideByHalf(1, num, 3.5vw)

Evaluates to:

section *:nth-child(1) {
  margin: 1.75vw;
}
section *:nth-child(2) {
  margin: 0.875vw;
}
section *:nth-child(3) {
  margin: 0.4375vw;
}
section *:nth-child(4) {
  margin: 0.21875vw;
}

Stylus comes with many useful selectors. You can now use partial references and even ranges in partial references to assign an attribute to a nested element without worrying that the parent element will also inherit this attribute.

.menu
    .sub-menu
        display: none

        ^[0]:hover ^[-1..1]
            display: block

Evaluates to:

.menu .sub-menu {
  display: none;
}
.menu:hover .sub-menu {
  display: block;
}

Supercharge with PostCSS

Stylus has a lot of useful functionality and features out of the box, but we can do one better: we can postprocess our style files to be even more robust and future-forward! The main library we use to achieve this is PostCSS.

PostCSS allows us to use a plugin called CSSNext (as well as many other plugins), which in turn enables the use of CSS4 features and Autoprefixer. These libraries grant us the luxury of offloading some of the mental baggage of writing styles and browser-specific fallbacks for all the different browser versions, as well as the freedom to experiment with new technology to make our jobs easier and more sane.

So, what does this look like?

First, we need to set our source files and our environment flag:

var options = {
    stylus: {
        src: './myproject/static/stylus/index.styl',
        watch: './myproject/static/stylus/**/*.styl',
        dest: './myproject/static/css/'
    },
    development: true,
}

Next we create the gulp pipeline:

// Requires for this first snippet (plugin package names assumed); later
// snippets also use gulp-sourcemaps, gulp-postcss, postcss-cssnext,
// gulp-if, lazypipe, gulp-livereload, gulp-notify, and yargs.
var gulp = require('gulp');
var stylus = require('gulp-stylus');
var rename = require('gulp-rename');

var stylusTask = function () {
    return gulp.src(options.stylus.src)
        .pipe(stylus())
        .pipe(rename('bundle.css'))
        .pipe(gulp.dest(options.stylus.dest));
};

Nothing too crazy here; we preprocess our style files and combine them into a single file called bundle.css and put it in our specified CSS destination folder.

What if we wanted to minify our CSS file to cut down on file size, but also include a way to debug by referencing the original style file where a rule originates from? We pass in a parameter to Stylus to minify the files and enable sourcemapping:

var stylusTask = function () {
    var stylusOpts = {
        compress: true
    };

    return gulp.src(options.stylus.src)
        .pipe(sourcemaps.init())
        .pipe(stylus(stylusOpts))
        .pipe(sourcemaps.write())
        .pipe(rename('bundle.css'))
        .pipe(gulp.dest(options.stylus.dest));
};

How about integrating some useful plugins that allow us to automatically prefix our styles and allow us to use new technology like CSS Grid, CSS Variables, CSS4 features, etc? We can specify which plugins PostCSS should use for the features we want. In our case, CSSNext includes Autoprefixer, as well as a slew of new features:

var stylusTask = function () {
    var stylusOpts = {
        compress: true
    };

    var plugins = [
        cssnext({browsers: ['last 2 versions']}), // we tell autoprefixer to prefix rules to support the last 2 versions of all browsers
    ];

    return gulp.src(options.stylus.src)
        .pipe(sourcemaps.init())
        .pipe(stylus(stylusOpts))
        .pipe(postcss(plugins))
        .pipe(sourcemaps.write())
        .pipe(rename('bundle.css'))
        .pipe(gulp.dest(options.stylus.dest));
};

What if we want to modify the gulp pipeline in specific, local development only cases? We can use gulpif and lazypipe to pipe in extra tasks conditionally:

var stylusTask = function () {
    var start = Date.now();  // referenced by the notify message below
    var stylusOpts = {
        compress: true
    };
    var plugins = [
        cssnext({browsers: ['last 2 versions']}),
    ];
    var devHelpers = lazypipe()
        .pipe(livereload)
        .pipe(notify, function() {
            console.log('CSS bundle-stylus built in ' + (Date.now() - start) + 'ms');
        });

    return gulp.src(options.stylus.src)
        .pipe(sourcemaps.init())
        .pipe(stylus(stylusOpts))
        .pipe(postcss(plugins))
        .pipe(sourcemaps.write())
        .pipe(rename('bundle.css'))
        .pipe(gulp.dest(options.stylus.dest))
        .pipe(gulpif(options.development, devHelpers()));
};

Lastly, what if we want to run the gulp pipeline in conjunction with other functions, based on our environment setting? We can achieve this by checking our environment setting variable and running the appropriate commands:

var options = {
    stylus: {
        src: './myproject/static/stylus/index.styl',
        watch: './myproject/static/stylus/**/*.styl',
        dest: './myproject/static/css/'
    },
    development: true,
}

// argv comes from a command-line argument parser such as yargs or minimist,
// assumed to be required alongside the gulp plugins.
if (argv._ && argv._[0] === 'deploy') {
    options.development = false;
} else {
    options.development = true;
}

var stylusTask = function () {
    var stylusOpts = {
        compress: true
    };
    var plugins = [
        cssnext({browsers: ['last 2 versions']}),
    ];
    var devHelpers = lazypipe()
        .pipe(livereload)
        .pipe(notify, function() {
            console.log('CSS bundle-stylus built in ' + (Date.now() - start) + 'ms');
        });

    var run = function () {
        return gulp.src(options.stylus.src)
        .pipe(sourcemaps.init())
        .pipe(stylus(stylusOpts))
        .pipe(postcss(plugins))
        .pipe(sourcemaps.write())
        .pipe(rename('bundle.css'))
        .pipe(gulp.dest(options.stylus.dest))
        .pipe(gulpif(options.development, devHelpers()));
    }

    if (options.development) {
        var start = Date.now();
        console.log('Building Stylus bundle');
        stylusOpts.compress = false;
        gulp.watch(options.stylus.watch, run);
        return run();
    } else {
        return run();
    }
};

gulp.task('css', stylusTask);

gulp.task('rebuild', ['css']);

gulp.task('deploy', ['rebuild']);

Final Thoughts

By customizing our CSS bundling process to take advantage of preprocessing and postprocessing options, we can now claim that our front-end packaging does the following:

  1. Accounts for multiple development environments (local, staging, production) by modularizing the CSS Gulp pipeline task.
  2. Uses style preprocessing that allows us to write style rules using familiar programming paradigms.
  3. Uses style postprocessing to ensure feature support and polyfills for all browsers, and enables us to safely implement experimental technology in production-ready settings.

If you found that helpful, we have more CSS and front-end tips on the blog.

Caktus Group2018 Event Shortlist

The Caktus team attends a number of conferences each year to learn about the latest tips and tools. Several of us also go to events to share knowledge as speakers or sprint leaders. Using our varied experiences, we’ve put together a list of the events we’re looking forward to next year.

February

UX Conference - Los Angeles, CA (UX)

NN Group hosts this conference for UX best practices. Our team appreciates the chance to train with industry thought leaders and take advantage of certification opportunities. Courses cover a range of skill levels, from beginner to advanced, so there’s a little something for everyone.

For more information about why you should attend, NN Group has an article including reasons, testimonials, and video. Not able to go to the West Coast? There is also a Washington, D.C. event in April.

March

TestBash Brighton - Brighton, United Kingdom (QA Testing)

Based on a good experience at TestBash Philadelphia, our QA team is excited about next year’s event in Brighton, UK. TestBash is described by the team as an opportunity to make connections and discuss the future of QA.

DisruptHR - Multiple Locations (Management / HR)

Recruiting and retaining top employees is important for any business. This conference is recommended by our HR staff for managers and HR professionals looking to try something new to support, grow, and encourage their teams. It's full of interesting lightning talks on the latest trends in HR with a modern perspective, leaving attendees feeling inspired and ready to approach challenges from a different angle.

Some locations also have events in April.

April

Global Scrum Gathering - Minneapolis, MN (Agile / Scrum)

The event of the year for Scrum masters. Head to Minneapolis in April (or London in October) next year to learn new applications and best practices for Scrum.

Wondering if it’s for you? Scrum Alliance has their list of top 10 reasons to attend.

Quality Jam - Atlanta, GA (QA Testing / Development)

Quality Jam is an event for those looking toward the future of QA testing. It promises to provide real-world solutions to software development challenges. Our team hopes to pick up the latest techniques for testing while getting some hands-on training.

deliver:Agile 2018 - Austin, TX (Agile)

deliver:Agile focuses on the tools and techniques behind Agile engineering and architecture. This conference welcomes not only project managers and developers, but also data scientists, UX and QA professionals, cloud specialists, and more in recognition of the diverse set of skills found on an Agile team.

May

PyCon 2018 - Cleveland, OH (Development)

While there are many great tech, Python, and Django events, PyCon is by far the most anticipated event here at Caktus. Why is it so popular? Our team appreciates the talks, tutorials, and development sprints; enjoys exchanging information on innovating with Python; and picks up insights from other Pythonistas.

There’s also the interpersonal aspect. Each year, Cakti look forward to reconnecting with peers, building new relationships, and uncovering partnership opportunities. The size of the conference, with nearly 3400 attendees in 2017, means that there is ample opportunity to meet Python enthusiasts and community leaders.

Those of our team who attend always pick a few of their favorite talks out of the many good ones delivered and add them to our PyCon Must-see Series. If you’ve never been to PyCon and are looking for a taste of what it’s like, check out those videos.

June

Eyeo Festival - Minneapolis, MN (Development / Data Visualization)

Data gains an extra punch when combined with visuals, and this event has been described by our team as “dataviz heaven”. Topics include everything from gestural computing to data art, so if the intersection of data and design is your thing, take a look at this one.

August

Agile2018 - San Diego, CA (Agile / Project Management)

Our project managers and Scrum master highlight Agile2018 as a conference that provides an excellent opportunity to learn trends and new ideas. This is a good generalist conference for anyone working with Agile and encompasses a wide range of topics.

October

DjangoCon US 2018 - San Diego, CA (Development)

DjangoCon is another staple for the Caktus team. As a Django-focused company, Caktus has sponsored and attended the last eight DjangoCon events, sending numerous team members each year. It’s a smaller conference than PyCon, offering a friendly atmosphere and an inclusive, supportive community of Django developers, with talks on a range of relevant topics. In 2017, those talks included one from a Caktus developer on writing an API for almost anything.

If you develop with Django, want to learn more about the framework, or are looking for Django-driven software vendors, this is a good conference.

All Things Open - Raleigh, NC (Development / Open Source)

When they say “all things open,” they’re not kidding. Open source, open web, and open tech are all covered here. This is a big event, with 3200+ attendees in 2017, so get ready to make new connections in the open community.

One of the other reasons we like this conference is the focus on diversity and inclusion, with initiatives to ensure underrepresented groups can attend.

Check out their list of reasons to go.

TBD

Red Hat Agile Day - Raleigh, NC (Agile / Project Management / QA)

This conference is free, and there are always some good talks that inspire our team. This year’s event included a presentation by an opera singer, which provided new perspectives in thinking about Agile’s applications. Consider going for a fresh take on Agile.

This event was last held in October 2017.

OnAgile - Online Event (Agile / Project Management)

Another conference presented by Agile Alliance, OnAgile is one of the more affordable events for attendees and accessible for those who can’t catch it live, with recorded sessions for later viewing. This event aims to bring Agile to everyone and was last held in October 2017.

Caktus GroupAWS re:Invent Recap

As a certified Amazon Web Services (AWS) Consulting Partner, Caktus sent a member of the team to AWS re:Invent this year to meet other solution providers, discuss with AWS representatives how to leverage our partnership to best serve our clients, and of course, get hands-on experience with both existing and newly-revealed AWS services.

With nearly 40,000 attendees, 1,000+ sessions, and 40 tracks, all spread out across multiple venues, it was by far the largest conference I have had the privilege of attending. As a first-time attendee, I found the conference’s mobile application critical for making the most of the experience.

Conference organizers did a fantastic job of adding overflow and repeat sessions for popular topics. It probably comes as no surprise to learn that serverless, containers, and the Internet of Things (IoT) seemed to attract the most attendees. If you were unable to attend in person, or were there and missed interesting sessions, Amazon promptly made the sessions available on YouTube.

The Global Partner Summit provided a one-stop location to interact with other partners and attend breakout sessions related to the partner experience. It was great hearing how other solution providers tackle similar problems, such as repeatable, maintainable deployments, and learning about the 2018 roadmap for the AWS Partner program.

Caktus has utilized AWS as part of many clients’ solutions, such as iN DEMAND’s digital archiving system and University of Chicago’s online survey platform. Interested in learning more about how Caktus can assist you with your AWS and project needs? Contact us to get started.

Caktus GroupCaktus is Excited about Django 2.0

Did you know Django 2.0 is out? The development team at Caktus knows and we’re excited! You should be excited too if you work with or depend on Django. Here’s what our Cakti have been saying about the recently-released 2.0 beta.

What are Cakti Excited About?

Django first supported Python 3 with the release of version 1.5 back in February 2013. Adoption of Python 3 has only grown since then and we’re ready for the milestone that 2.0 marks: dropping support for Python 2. Legacy projects that aren’t ready to make the jump can still enjoy the long-term support of Django 1.11 on Python 2, of course.
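
For projects staying behind, pinning below 2.0 is the conventional safeguard; this is a generic requirements entry rather than anything from the release notes:

# requirements.txt: stay on the Python 2-compatible LTS line
Django>=1.11,<2.0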

With the removal of Python 2 support, a lot of Django’s internals have been simplified and cleaned up, no longer needing to support both major variants of Python. We’ve put a lot of work into moving our own projects forward to Python 3 and it’s great to see the wider Django community moving forward, too.

In more concrete changes, some Caktus devs are enthused by transitions Django is making away from positional arguments, which can be error-prone. Among the changes are the removal of optional positional arguments from form fields, the removal of positional arguments from indexes entirely, and the addition of keyword-only arguments to custom template tags.
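
As a rough sketch of the template tag change (the tag name and module layout here are invented for illustration), a simple_tag can now declare keyword-only arguments:

from django import template

register = template.Library()

@register.simple_tag
def greet(name, *, greeting='Hello'):
    # 'greeting' is keyword-only: {% greet "Ada" greeting="Hi" %} works,
    # while passing it positionally raises a TemplateSyntaxError.
    return '%s, %s!' % (greeting, name)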

Of course, the new responsive and mobile-friendly admin is a much-anticipated feature! Django’s admin interface has always been a great out-of-the-box way to give staff and client users quick access to the data behind the sites we build with it. It can be a quick way to provide simple behind-the-scenes interfaces to control a wide variety of site content. Now it extends that accessibility to use on the go.

What are Cakti Cautious About?

While we’re excited about a Python 3-only Django, the first thing on our list of cautions about the new release is also the dropping of support for Python 2. We’ve been upgrading a backlog of our own Django apps to support Python 3 in preparation, but our projects depend on a wide range of third-party apps among which we know we’ll find holdouts. That’s going to mean finding alternatives, submitting pull requests, and even forking some things to bring them forward for any project we want to move to Django 2.0.

Is There Anything Cakti Actually Dislike?

While there’s a lot to be excited about, every big change has its costs and its risks. There are certainly upsets in the Django landscape we wish had gone differently, even if we would never consider them reasons to avoid the new release.

Requiring ForeignKey’s on_delete parameter

Some of us dislike the new requirement that the on_delete option to ForeignKey fields be explicit. By default, Django has always used the CASCADE rule to handle what happens when an object is deleted while other objects still hold references to it, deleting the whole chain of objects together to avoid broken state. There are also other on_delete options for behaviors like prohibiting such deletions or setting the references to None when the target is deleted. As of Django 2.0, on_delete no longer defaults to CASCADE and you must pick an option explicitly.

While there are some benefits to the change, one of the most unfortunate results is that updating to Django 2.0 means updating all of your models with an explicit on_delete choice…including the entire history of your migrations, even the ones that have already been run, which will no longer be compatible without the update.
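
In practice, the change looks something like this (the models here are invented for illustration):

from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    # Before 2.0 this could be written models.ForeignKey(Author), with
    # CASCADE implied; Django 2.0 makes the choice explicit and mandatory.
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    # Alternatives include PROTECT (refuse the delete) and SET_NULL,
    # which requires the field to be nullable.
    series = models.ForeignKey(
        'Series', on_delete=models.SET_NULL, null=True, blank=True)

class Series(models.Model):
    title = models.CharField(max_length=100)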

Adding a Second URL Format

A new URL format is now available. It offers a much more readable and understandable format than the old regular-expression-based URL patterns Django has used for years. This is largely a welcome change that will make Django more accessible to newcomers and projects easier to maintain.
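
Here’s a quick side-by-side sketch using a hypothetical views module; both styles can live in the same urlpatterns list:

from django.urls import path, re_path

from . import views  # hypothetical views module

urlpatterns = [
    # Old style: a regular expression with a named capture group;
    # 'year' arrives at the view as a string.
    re_path(r'^articles/(?P<year>[0-9]{4})/$', views.year_archive),
    # New style: a path converter that matches the same URLs and
    # casts 'year' to an int before calling the view.
    path('articles/<int:year>/', views.year_archive),
]

In a real project you would keep only one of the two routes; they’re shown together here for comparison.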

However, the new format is introduced in addition to the old-style regular-expression version of patterns. You can use the new style in new or existing projects, and you can choose to replace all your existing patterns with the cleaner style, but you’ll have to continue to contend with third-party apps that won’t make the change. If you have a sufficiently large project, there’s a good chance you’ll forgo migrating all your URL patterns.

Maybe this will improve with time, but for now, we’ll have to deal with the cognitive cost of both formats in our projects.

In Conclusion

Caktus is definitely ready to continue moving our client and internal projects forward with major Django releases. We have been diligently migrating projects between LTS releases. Django 2.0 will be an important stepping stone to the next LTS after 1.11, but we won’t wait until then to start learning and experimenting with these changes for projects both big and small.

Django has come a long way and Caktus is proud to continue to be a part of that.

Caktus GroupCaktus Discovery Workshops

Before an app can be built, the development team needs to know what they are supposed to be building. How do they establish that? With requirements gathering.

Requirements gathering

Product discovery, or requirements gathering, happens on every development project. This isn’t a service, but rather an internal process at a development company. Some of it must be carried out before anything can be designed or built, and some of it happens throughout the development project. While it may seem that this just adds time to the project, it is vital to delivering a product that meets the project objectives.

Requirements gathering may be as simple as having the client stakeholder, project manager, and developers review existing documentation and materials. However, often there is much more preparatory work to be done in order to build a solution that addresses the client’s business goals and the end user’s needs.

Product discovery ensures that all client stakeholders and the product team are in alignment on what is being built and why. This blog post explains the early stages of product discovery in more detail, but the process may include the following steps:

  • A review of the business and project goals.
  • A competitive landscape review, to gain an understanding of what has already been done and how well it’s working.
  • In the case of content-heavy websites, a content audit to determine what is available and how users are intended to interact with it.
  • A discovery workshop to determine requirements in greater detail.

Discovery workshops

Some projects need greater definition than is available at the beginning. They may lack documentation, or it may have become clear at some point in the sales process that the client has a great idea, but isn’t quite sure how to build it yet. Lack of consensus with or buy-in from other teams or departments on the client’s side may also be an issue.

If that’s the case, one tool to use as part of the initial discovery phase is a discovery workshop. The way in which the workshop is carried out is unique to each client and depends on the goals and budget of the project, but at Caktus we recommend starting with one of two techniques: user story mapping or content modeling. The technique used depends on whether the project is to build a web app or to develop a customer-facing marketing website.

What’s the difference? With a web app, the focus is on completing tasks, such as data input or interacting with the website to post an update. For a marketing website, the objective is to deliver content. Users must be able to easily locate content such as videos, PDFs, or even simple blog posts, and take the desired actions to consume it (i.e., read, bookmark, download, or share).

Let’s look at how user story mapping and content modeling form the basis of a discovery workshop for web apps and websites.

User story mapping

For web app development projects, user story mapping is essential to giving design, coding, UX, and testing teams an understanding of user flows, user tasks, and client priorities. It also ensures that essential features haven’t been overlooked.

User story mapping is a technique used to map out the user flows and tasks an app must support. A top-level flow of user actions (the narrative flow) is identified first. Next, the different tasks and subtasks necessary to accomplish the top-level actions are laid out beneath. Finally, tasks are sorted above or below a prioritization line to establish the most valuable features for inclusion in a minimum viable product (MVP).

A diagram of a user story map, with a priority line indicating the most valuable features.

The greatest value in carrying out user story mapping is building a shared understanding between Caktus and client teams around the features the application must support to deliver business and user value, and the order of priorities.

It also reduces the amount of guesswork that goes into estimating the time and money required to complete the project. It enables the team to estimate coding, UX, and QA work with more confidence, providing better value for money and a more accurate scope of work.

If the client decides to move forward with developing the project, an additional bonus lies in the ability to translate the map into user stories and to create a prioritized development backlog. This is the list of tasks that the team will focus on developing. The project manager organizes those tasks using existing data about the team’s pace and the information gained during requirements gathering.

Read more about user story mapping and how it is translated into a release plan.

It is important to note, however, that only an initial prioritization is done based on user story mapping. In Agile development, there is always room to update and re-prioritize tasks, so it shouldn’t be assumed that the backlog established at the beginning of a project is the final one or that all of the tasks listed at the start will be completed if there are changes to the project along the way. The project manager works with client stakeholders to ensure that any changes to budget, deadline, and desired features are appropriately accounted for in prioritization.

Content modeling

For marketing websites intended to deliver content, a discovery workshop focused on content modeling provides a more detailed understanding of how the website should be structured in order to facilitate content delivery.

For an existing website, a content audit is a necessary prerequisite. A spreadsheet detailing the following is a good place to start:

  • Different content types, associated page types, and file formats
  • The target audience
  • Desired user actions (e.g. watch, download, interact)
  • Intended placement on the website
  • How they will be updated and who will carry out the updates
  • Any other relevant notes such as priority, future plans, or preferences

A content modeling workshop helps refine content types and their relationships. It starts with asking questions about the needs users have when they come to the website, identifying nouns used to describe user needs and goals, and analyzing which content types connect to each other and how.

Content types are then broken down into chunks by asking what facets each content type comprises, and how those chunks could best be developed to support display across various screen sizes. This activity sets up the client stakeholders for the final tasks of writing new or amending existing content, which they do independently after the workshop.

For a new website without fully developed content, stakeholder interviews are a good method to generate the information needed to begin understanding what content might be appropriate to support user goals.

Other methods and techniques

Projects with more time and budget could include other activities. For example, diagramming the application architecture in addition to user story mapping helps in understanding relationships beyond the otherwise linear representation of user flows within a user story map. Ideation can help generate ideas for a new application, while sketching can help identify solutions for existing or new interfaces.

Any of the techniques mentioned in this post can be carried out individually or in conjunction with the others. They can be done outside of a workshop as well. However, our experience at Caktus is that a discovery workshop pulling in all of the stakeholders is most effective at getting to the heart of a project.

It should also be mentioned that while a discovery workshop is done at the beginning, the process of discovery doesn’t end when development begins. It occurs throughout the course of the project, especially when the project follows Agile methodologies.

Why do a discovery workshop?

Why spend extra time and money on a discovery workshop when you already know what you want?

It’s true, not every project needs a discovery workshop as part of the initial discovery phase. When clear documentation, priorities, and scope are available, sharing those and having a conversation may be the extent of what is needed for requirements gathering.

We’ve found that the best candidates for a discovery workshop are those projects where:

  • Documentation is available for an existing version of the app or website, but significant changes are desired for an updated version.
  • The project is complex in terms of dependencies, the number of interactions, or data structuring.
  • Teams on the client side are unsure how best to proceed, or have conflicting visions of what features would best fulfill user needs and/or business objectives.
  • The target users and key user tasks and flows haven’t been mapped out.

If one of those sounds familiar, or if you’re generally interested in finding out more about discovery workshops at Caktus, get in touch and tell us about your project. Still researching? Try this post about getting started with outsourced web development.

Caktus GroupDeveloping Sharp Interns

Our internship program sustains Caktus’ growth, challenges and reinvigorates our development practices, builds our relations with the local tech and wider Django communities, and hones our operational practices as a company. This post shares our guiding principles for how we structure our developer internship to achieve these goals, while providing a meaningful and edifying experience for the interns we hire.

Put in the Necessary Time and Resources

Long before we even begin recruiting, hiring, and onboarding a candidate, our team puts in extensive prep work in anticipation of two to three interns a year. We are detailed in our search, set aside a specific portion of our recruiting budget for the position, and cast a wide net. Considerable time and resources are devoted to finding an ideal candidate. The reasons for this are many:

  • It is costly and disruptive to hire the wrong person; we want to get it right.
  • Our internship partially functions as a pipeline for identifying local talent. We have to look at each and every candidate as though he or she could be joining our team full time.
  • Having a paid internship is one way to open the gates of the tech industry to those who have traditionally been shut out. It makes tech jobs more accessible to a wider pool of diverse talent. We want to get this opportunity in front of as many people as possible.
  • Our internship is unique in that it is fairly flexible and self-driven. It takes a candidate with a sufficient level of independence and moxie—balanced by the humility to ask for help when needed—to make this structure work. Finding such a candidate requires a significant amount of effort.

Treat Each Intern Like an Employee

Central to the success of our internship program is a deceptively simple tenet: treat each and every intern like an employee. It seems obvious, but many companies do not do this. For us, it is the most important element of an internship. Not only does it create an atmosphere for growth, but it also accomplishes the actual goal of an internship: introducing a novice to the real experience of working as part of a development team.

Real Teams, Real Work

Rather than siloing our interns onto separate teams and assigning them busy work or the task of creating tools that will never be used again, we place Caktus interns on a real team with our full-time developers. They are wholly integrated team members, taking part in all Scrum activities and any other team-related meetings.

Like any other developer on their team, interns self-select their work during sprint planning. They participate in real projects that will continue to be used and added to. They are doing work others will need to use later, learning best practices for writing clean, scalable code. At heart this means that our internship is not an academic experience; it is a practical one. We have found that this practicality serves as the best atmosphere in which an intern can grow.

And what do we get from instilling such trust in our interns and bringing them on as full-time team members? Having an intern fully participate on a development team encourages a more collaborative culture of mentorship in which questions are welcome and everyone remains open to fresh perspectives.

“I was encouraged to review my teammates' code, and my comments were taken seriously. I was always respected as a valuable part of the team.” - Charlotte Mays, Intern 2016 / now full-time developer

The Full Gamut of Operational Processes

An internship is a great way to practice, solicit feedback on, and fine-tune operational processes. Our internship program has been a great way for us to improve our interviews as well as our candidate screening and hiring practices. From onboarding to exit interview, we take our intern through the full process like any other employee. Not only does this give the intern necessary career experience, but it also creates a helpful feedback loop for internal process improvement.

Other Elements for Growth

Of course, treating an intern like an employee requires a lot of trust as well as the proper environment for success. Our interns themselves need to be sufficiently driven and sufficiently humble, and the structure of our program needs to support this balance. We have found the most success in allowing a self-determined and malleable learning plan, while providing the mentorship necessary to lend structure and direction.

Flexibility

We often describe our internship as a “choose your own adventure.” Not only do we remain flexible in terms of start and end dates and work hours, but also with regard to the tasks an intern may take on. As mentioned above, interns self-select the tasks and features they will work on. This requires a delicate balance between:

  1. Selecting tasks they are capable of completing in a development sprint, and
  2. Selecting features that will challenge and develop their current skill set.

Mentorship

To achieve this delicate balance, guidance through mentorship is key. Every intern is assigned a mentor from day one of their internship. Interns and mentors meet regularly to set, discuss, and track progress on goals, give and receive feedback, and evaluate personal and professional growth.

We hire interns with a variety of goals: from students still in school exploring web development as a potential career, to young adults fresh out of school seeking to enter the technology sector, to individuals seeking a career change after having worked in other industries. Whatever the context, we make sure to cater our mentorship program to each intern’s self-identified goals.

Improve the Community

To create a truly meaningful experience, it is important to keep our own end goals in mind as a company. Why are we doing this in the first place? We built our internship program to provide an opportunity that both personally rewards a learning developer and also improves the Django and local tech community. This means we focus on two main goals throughout the internship:

  1. Instilling development best practices: mentoring developers who will go on to write code the right way.
  2. Imparting Caktus’ values: producing developers who will be curious and empathetic, seek excellence, and give back to their community, whether that be through open source contributions or future mentorship.

Ultimately, we love helping to mentor and grow developers in the community, and our internship program is a key part of that effort.

Learn more about our program and what it’s like to be an intern at Caktus from a former intern’s perspective.

Caktus GroupGetting Started with Outsourced Web Development

In researching outsourced web development, you may have come across a few different ways to get your project built and have some questions as a result. How well defined do the project requirements need to be prior to starting development? Will Waterfall or Agile methods deliver the best results? Should you look for a consultancy offering team augmentation or in-house Agile-based work? What are the ramifications for your project of picking one or the other?

Let’s take a look at each of these questions, and what we recommend for different projects.

Project definition

Moving forward with a project happens when three key pieces of information are known:

  • Budget: How much are you willing and able to spend?
  • Timeline: How quickly do you need the final deliverable?
  • Project requirements, e.g. a product roadmap, release plan, and/or defined MVP: Do you have a clear idea of what you want to build?

With this knowledge, a project can be estimated, giving you a better idea of how much can be built for your budget, whether there are time- or cost-saving alternatives, and whether additional or different work could add value to the project.

If you don’t have a timeline or budget, but do know what you want to build and can provide requirements and documentation, a team can evaluate the project and provide a cost and projected timeline.

Or, perhaps you know your timeline and budget but are still working on the third piece: clearly defining what exactly you are trying to build. Even if you think this part is figured out, it often happens that stakeholders have different visions for the project and lack a shared understanding, which can be time-consuming and costly to address later.

How do you check that everyone involved in the project really does have the same understanding of what will be delivered? A discovery phase is an excellent first step.

The discovery phase of a project consists of steps aimed at gaining a deeper understanding of the product, including its contexts, its users, and the business goals it is meant to support. One approach employed during the discovery phase is a discovery workshop, which may include a number of activities aimed at determining what should be developed and what the priorities are.

In the discovery workshop, the process of product discovery aids in framing the problem the product should solve; identifying user roles; mapping out user actions, tasks, and workflows; and finally sketching out ideas for a product that addresses each of those steps based on the unified vision gained from the workshop. Furthermore, techniques like user story mapping contribute greatly to building a high-level release plan that clearly prioritizes the most valuable features for development and gets the project off to a strong start.

Waterfall or Agile?

Once you know what you want to build, how quickly you need it, and how much you can spend, it’s time to look at the different ways of developing the application or website.

Waterfall and Agile are both methodologies, or processes, to guide software design and development. The principles of each methodology inform how the project is managed in terms of how it moves through each phase of development, how and when feedback is received and implemented, and when testing is carried out.

One of the main differences between the two methodologies is that Waterfall follows a linear model, where each succeeding phase is started only after the previous has been completely finished. In this model, the client doesn’t see any of the work until the project nears completion. The different team roles (designers, developers, quality assurance, and so on) don’t collaborate throughout the project, only seeing what the other team has built when it’s their turn to begin, and testing is carried out at the end.

Agile follows an iterative model. In iterative software development, work is broken into chunks that can be completed in a short time frame, with testing ongoing throughout. At the end of each iteration, the goal is to have a potentially complete product increment which can then be built on as needed.

Another difference between the two methodologies is that Agile development considers change to be part of the process. With Waterfall, it can be increasingly difficult to make changes or implement feedback as the project approaches completion. By the time the project is shared for review, it may be too late to make adjustments. In contrast, Agile teams produce usable software to give feedback on throughout the process and are able to implement that feedback more easily.

At Caktus, we use Agile frameworks like Scrum and Kanban to develop projects because they enable us to act on feedback and ensure we’re delivering the most valuable features first, a tenet of the Caktus Success Model. They also ensure that we’re focusing on the features the client has prioritized as most important, even when priorities change.

What does this mean for our clients? In short, rather than asking a client to pay for work and then wait until the end of a project to see the results, we present production-ready features on a regular basis. That work can then be evaluated by client stakeholders, and feedback can be prioritized and implemented throughout the project in a continuous loop.

It’s worth mentioning that some level of flexibility in at least one of the three elements of a project as defined in the first section - scope, time, or budget - is necessary for Agile development to work. It is this flexibility that enables a team to accommodate any changes that may arise and to respond to client feedback as the project progresses.

Client-managed project, or vendor-managed project?

In addition to the methodologies themselves, there are a few different ways to manage the projects. If you have an internal development team and project manager (PM), client-managed team augmentation may be an option. This is most feasible when the need for staff is temporary and one or a few roles are needed for a set number of hours per week to support the on-time completion of a project.

Team augmentation is most effective when a clear product roadmap is in place and you have an internal PM. If you lack a project manager or aren’t entirely sure what tasks need to be carried out and when, it’s common for a contractor to lack clarity on what tasks take priority and how they should be spending their hours. In that case, a more effective option may be an Agile-based project entirely contracted out to a custom development firm.

In this scenario, the external team is responsible for maintaining the backlog of tasks and features (with your input and feedback on priorities), determining what is worked on in each development period, and building and testing the work. All tasks, including project management, development, and quality assurance testing, are carried out by the external team.

That doesn’t mean the project is out of your hands. While working with the external team, you still play a key role as a stakeholder. The stakeholder stays in touch with the team, giving feedback and communicating priorities, and maintains the overall vision of what should be produced. There are regular opportunities to see progress as well as to communicate what is going well, what can be improved, and how well expectations are being met. This enables the team working on the project to deliver a product aligned to your specifications and objectives.

Get started

Ready to move forward with development? Caktus offers discovery workshops, team augmentation, and Agile-based development services. Even if you’re still unsure what will work best for your project, our experienced team can help determine which solutions will be most effective. Contact us to get started.

Caktus GroupShipIt Day Recap Q4 2017

Our quarterly ShipIt Day has come and gone, with many new ideas and experiments from the team. As we do every quarter, Caktus staff stepped away from client work to try new technology, read up on the latest documentation, update internal processes, or otherwise find inspiring ways to improve themselves and Caktus as a whole. Keep reading to see what we worked on.

Style and Design

Last ShipIt Day, our front-end and UX designers started work on a front-end style guide primer. Work continued on this ShipIt Day, with Basia working on typography and color. The guide now includes documentation explaining the handling of font properties and responsive font sizes, typography selectors and how they render with given settings, and guidance on how font sizes can be modified according to different placements or needs. The color palettes now show how colors should be displayed and used in context.

Basia also started learning CSS Grid with the Grid Garden game and delivered a talk at The Iron Yard called “Design Primers for Devs.”

JIRA Improvements

JIRA is one of the project management tools we use at Caktus, and a recent update changed the setup of our account. Sarah and Charlotte F worked to ensure that all of our projects and boards are tidy and reduced complexity in access, then demoed the changes to the team.

Educational Content

Recognizing a gap in content to help potential clients learn more about how we work at Caktus, Julie and Whitney plotted out the Sales process and brainstormed visual methods for presenting the information.

Photo Booth App

Neil looked into learning or refreshing his knowledge of a few different technologies by building a progressive web app (PWA) that he could use to scan barcodes. That idea morphed into a photo booth app, which provided an opportunity to learn how to access a laptop camera from within the browser. He also looked at using IndexedDB for storing blobs of binary data generated by the photos taken. Creating the app required manipulating canvas image data to produce a glitched effect and using React in concert with all of these, plus the Materialize CSS framework.

API Star

Mark took a look at API Star, a web API framework for Python 3. He dove into the type system and route definitions, finding that they do useful things out of the box, like automatic validation of inputs, HTTP-method-based routing, and a simple path-matching syntax similar to the upcoming changes in Django 2.0. The framework also allows setup of authentication, another helpful feature. While the project’s immaturity shows at this time, it makes interesting use of type annotations and holds promise for the future.

In addition to API Star, Mark worked on improving a project test suite, identifying why it ran so slowly and cutting its runtime to a fraction of the previous time. This required learning a bit more about Factory Boy and making better use of SelfAttribute to reduce the number of models created when using sub-factories.
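
The pattern looks roughly like this (the models and factories are invented for illustration):

import factory

from myapp import models  # hypothetical app

class LeagueFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.League

class TeamFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Team

    league = factory.SubFactory(LeagueFactory)

class GameFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.Game

    home_team = factory.SubFactory(TeamFactory)
    # Without the override below, this sub-factory would create a second
    # League; SelfAttribute('..home_team.league') climbs up to the Game
    # being built and reuses the league that home_team already created.
    away_team = factory.SubFactory(
        TeamFactory,
        league=factory.SelfAttribute('..home_team.league'),
    )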

Prometheus

One of the services we offer at Caktus is managed hosting. To ensure that we’re using the best technology, Scott decided to evaluate Prometheus, an open source monitoring tool. He found that it was fast and easy to get a server up and running, but he is still evaluating whether it’s a fit for Caktus.

Dokku and AWS Web Stacks

Colin recently helped redeploy a client website using Dokku and wanted to try out our AWS Web Stacks project to see if it could be used with Dokku for a Code for Durham project. One of the challenges he encountered was the use of PostGIS geodata in the project, which needed to be configured within Dokku and imported. However, Dokku’s simple interface and automatic requirements installation meant that everything started working nicely.

He thinks that Dokku is a good alternative for projects that don’t need a lot of web servers.

Ansible

Documentation is important to software development, so Jeff used his ShipIt Day time to look into creating a unified set of documentation for our Ansible roles and tooling. He also worked with Dmitriy, Phil, and Vinod as they delved into Ansible.

Hello Tequila

Dmitriy revisited the Hello Ansible app and used it to learn more about Tequila, by setting up a basic Django project and using Tequila repos to deploy it. He took notes on how to set up a project and deploy it, finding a few bugs in the process. Next ShipIt Day, he’s hoping to create a readme or walkthrough.

Vue.js

While attending the CSS Dev Conference in New Orleans, Kia heard about a new JavaScript framework called Vue.js. For her ShipIt Day Project, she decided to build a Twitter-like app to try it out.

Kia liked how the exercise required her to look at a design feature and break it down into its individual components, then recreate and reassemble them. She sees the framework as potentially valuable for websites relying on reusable components, and admires its focus on modularity and scalability.

QA Training

Our QA team took advantage of their ShipIt time to review training videos and materials from the American Software Testing Qualifications Board (ASTQB), with the aim of reaching expert level. Robbie hopes that the certification will give him the skills and industry recognition to further his career as a QA analyst. Gerald was pleased to find that real-world examples were used, and that the curriculum design enables testers to make connections between the certification courses and the real world.

Book Club App

For our Q2 2017 ShipIt Day, Charlotte M started on an app to help the Caktus Book Club vote on the next book to read. This ShipIt Day, Dana joined her in improving the app’s features and usability. Together, they added the ability to edit or delete a book from the list, and updated elections so that they can be deleted if need be. Previously, elections had a set open date that couldn’t be changed; that has been updated so that books can continue to be added before a date is set.

They also planned out functionality to add for the next ShipIt Day, including tracking respondents, improved list navigation or search, and more graceful error handling.

CloudFormation and AWS Web Stacks

Tobias continued to build on his Amazon Web Services expertise with ongoing work on our AWS Web Stacks project. He added a CloudFront Distribution for the app server to take advantage of its front-end caching capabilities, and an Elastic Load Balancer to the Dokku stack.

AWS Web Stacks is open source, and you can find the code on GitHub or learn more about it in Tobias’ recent post about automating a Dokku setup with AWS Managed Services.

Django Cache Machine

Vinod worked with Tobias to add long-overdue support for Django 1.9, 1.10, and 1.11 to the open source project Django Cache Machine, which provides automatic caching to Django models. In the process, they learned a lot about the details of Python iterators.

Caktus GroupThe Opera of Agile: A Striking Performance at Red Hat Agile Day

Have you ever heard anyone sing opera during a tech-focused conference? Neither had I, until now.

Red Hat Agile Day, held in downtown Raleigh, recently provided this unique opportunity. The theme of the 2017 Red Hat Agile Day was “Agile: brokering innovation; bringing together great ideas.” The conference certainly lived up to that theme with a diverse line-up of speakers, including a former professional opera singer who bookended his presentation with songs. One was a creative, original ballad about being an Agile product manager (see the lyrics here), which he delivered at full blast, because how else can you sing opera?

The exuberant vocal performances by Agile Product Manager Dean Peters certainly took the attendees by surprise - the shocked looks around the room were priceless. Instantly, I knew this was not going to be the average presentation on the Agile mindset, process, or procedure. Peters’ presentation on “Five Things I Learned About Lean MVP as a Professional Opera Singer” was not only entertaining but also informative. His delivery also makes it one presentation I won’t easily forget.

Peters compared his experiences as an opera singer to those as a coder and Agilist, making connections between the stage and the computer screen. He explained that operas are produced iteratively, being an aggregate of many small components, with production milestones, and a release plan. Yup, definitely sounds like Agile software development.

He also went into detail about how developing a stage character is like developing a user experience persona for a website, and that doing so could increase empathy and understanding for clients, stakeholders, and end users, ultimately improving the Minimum Viable Product (MVP). I agree that creating personas is a valuable practice since it helps the product owner and the development team to better understand the audience and end-users, ultimately leading to a product that’s more tailored to the end-user’s needs. To help construct personas, Peters recommends using characterization tools from theater and leveraging acting exercises and games to gain empathy. I plan to keep these exercises in mind and hope to use them at Caktus.

Practice is another key element that Peters highlighted. Just as actors practice for a performance, developers should practice for a client or sprint demo. Peters elaborates on this in a blog post on “10 things singing opera taught me about product demo prep.”

The Broader Applications of Agile

As it turned out, the connection between opera and Agile development wasn’t as much of a stretch as I thought it was, and Peters’ comparisons were insightful and easy to follow (his slides are available online). It made me realize how inclusive and universal the Agile mindset is, and how applicable it is to other professions, not just software development. In reality, it is probably already being applied without us even realizing it, like in the writing process. Writing this blog post, for example, was an iterative and Agile process, broken down into phases which could be compared to sprints - drafting, reviewing, editing, finalizing, and then releasing.

As I realized the broader applicability of Agile, a statement on the Red Hat Agile Day website struck me. It challenged attendees “... to connect the ideas and insights you'll be gathering for new innovation.” The sharp team of seven Caktus project managers and quality assurance analysts who attended the conference have already discussed some ideas that were spurred by the various presentations at Red Hat Agile Day. For example, we’re looking into Acceptance Test-Driven Development (ATDD), which was presented by Ken Pugh. It would provide a different way for Caktus to view testing and would help developers and testers to better understand our customers’ needs prior to software implementation. While ATDD is not new, it would be new for Caktus and would result in an altered workflow and a shift in mindset regarding testing. If we move forward with it, it will be interesting to see the results of this Agile innovation.

Caktus GroupWhite Space Explained

What White Space Is

In the context of web design, white space (or negative space) is the space around and between elements on a page. To non-designers, it may seem unnecessary or an expression of a particular aesthetic (and therefore non-essential to a web page). To designers, it is an essential tool to increase the comprehension of a composition and guide a viewer’s attention and focus.

What White Space Does

While white space may evoke a sense of elegance and sophistication, that is not its primary purpose from the perspective of user experience. White space helps the user understand the interface without undue effort; it reduces cognitive load and, as a result, greatly improves the quality of the experience.

Micro white space -- the white space between smaller elements on the page (e.g., characters or lines of text) -- improves legibility.

Using micro white space to improve legibility in smaller web page elements. Screenshots of two versions of the same web page with different amounts of micro white space (line height): the image on the left shows insufficient line height; the image on the right shows a comfortable line height.

Macro white space -- the white space between and around larger interface elements (e.g., paragraphs of text or graphics) or groups of elements (e.g., a section of an article or a web form) -- helps direct attention to those elements and improves comprehension. For example, a study by Dmitry Fadeyev demonstrated a twenty percent increase in comprehension due to proper use of white space between and around text elements.

Using macro white space to improve comprehension of a page. Screenshots of two versions of the same web page with different amounts of macro white space (margins between paragraphs and list items): image on the left shows lack of margins, image on the right shows paragraphs and list items separated by margins.

How White Space Works

White space works to improve legibility and comprehension in three major ways:

White space reduces cognitive load by increasing scannability of a web page

Properly applied white space supports scannability, an objective long postulated by Nielsen Norman Group (NNGroup). The results of their study from 1997 still hold true today: most users do not read web pages word-by-word. Instead, they scan them for specific words and sentences.

Separating chunks of text with a sufficient amount of white space makes scanning easier, decreasing the strain the user experiences while searching for the content they seek on a page.

White space clarifies relationships by fostering the perceptual principle of proximity

Two Gestalt principles of perception -- proximity and figure-background -- rely on white space.

The principle of proximity states that “objects that are closer together are perceived as more related than objects that are further apart”[1]. That means that by increasing white space between elements on the page, we signal to the user that those elements are less (or not) related to one another. By bringing elements closer together, we indicate they have a closer relationship.

Consider the following web form example. The even spacing between form elements on the left offers no visual cue about their relationships. On the other hand, an increase in white space between form sections on the right (accompanied by an appropriate use of headings) makes relationships between those elements much clearer.

Demonstration of how white space improves form comprehension.

White space guides attention and focus by strengthening visual cues that support figure-background separation

The figure-background principle states that “elements are perceived as either figure (the element in focus) or ground (the background on which the figure rests)”[2]. Whether we perceive something as a figure or its background is a result of how our brain interprets cues carried by objects of perception. Size, color, and edge are among visual properties that help us interpret an object as a figure. And a figure is where our attention tends to focus. By skillfully applying white space, we can therefore direct the user’s attention to parts of a layout we want them to look at, and guide them through an interface to tasks we want them to complete on a page.

Consider two search engine pages shown below. The amount of white space the Google page employs leaves no ambiguity about what user action should be taken.

Comparison of white space as used by search engines Yahoo and Google. Screenshots of search engine pages: Yahoo on the left, Google on the right.

Conclusion

At an intersection of design and development, compromises must be made to meet budgetary constraints and deadlines. At the same time, we must recognize that many design choices in web and user interface design are about minimizing cognitive load and facilitating comprehension. The long-term benefit of retaining and attracting users outweighs the short-term cost of precisely implementing the design choices critical to the quality of the user experience. White space is an important aspect of improving user experience, but it’s not the only one. Learn more about principles of good user experience from an earlier blog post.


1: “Design Principles: Visual Perception And The Principles Of Gestalt: Proximity,” Steven Bradley, Smashing Magazine, March 28, 2014.

2: “Design Principles: Visual Perception And The Principles Of Gestalt: Figure/Ground,” Steven Bradley, Smashing Magazine, March 28, 2014.

Caktus GroupCSS Tip: Fixed Headers and Section Anchors

Fixed headers are a common design pattern that keeps navigation essentials in easy reach as users meander down a page. Keeping a header fixed as the user scrolls can free up horizontal space for smaller devices by avoiding sidebars, and keeps your branding visible.

Anchors are another important navigation tool, linking not to a page but to a specific location in it. Whether for a long article, multiple parts of documentation, or navigation within a page broken up into sections, anchors can help users navigate directly to the part of a page they want to see.

Linking within a page is a natural case for using a fixed header. Users who follow links from other websites and land directly on an anchor on your web page can’t see your branding, your site navigation, or even what site they’ve landed on. Introducing a fixed header helps them see where they’ve navigated to, no matter where they’re taken on the landing page.

Unfortunately, internal linking and a fixed header pose a problem when used together.

The Problem

A fixed header overlapping a target.

Here we see a simple <a name="target"> anchor, which ends up behind our header, made translucent here for demonstration purposes. This happens because the browser navigates to the anchor by scrolling directly to it, but scrolling that far down puts the anchor visually right under the header. That’s a problem.

The Goal

A target behaving as desired under a fixed header.

This is what we want to see, with our anchor appearing just below the fixed header. The anchor is outlined in blue. You can see here how the section before the anchor is properly behind the fixed header, and the anchor is positioned just under it as if the top of the page starts just at the header’s bottom edge.

The Trick

We can make this happen with a little CSS trick. First, look at where we actually want the top of the page to appear so that our anchor appears in the right place.

Implementing the targeting trick

To make this happen we’re going to trick the browser into thinking the anchor is shifted above the visual location, by exactly the same height as the header we need it to appear under.

Just to set a baseline, let’s look at how the header we’re working around is actually set up.

header {
  width: 100%;
  background: lightblue;
  padding: 10px;
  margin-bottom: 10px;
  position: fixed;
  height: 30px;
}

The header is styled to affix itself to the top of the window as the user scrolls. This header is 30 pixels tall, has 10 pixels of padding, and carries a 10-pixel margin on the bottom to separate it a bit from the rest of the page content. The box layout of the header is illustrated below.

The Details

The box model showing how the target and fixed header are spaced.

If our “real anchor” needs to line up with the header and our “visible anchor” needs to appear just below it, then we need to position them apart by the total of the header’s height (30px), its top and bottom padding (10px each), and its bottom margin (10px). In our case, that makes an offset of 60 pixels.

Here’s our anchor’s styling:

.anchor a {
  position: absolute;
  left: 0px;
  top: -60px;
}

The visual label doesn’t need any special styling, but we do need to arrange the anchor and the label as siblings inside a container in the markup. Note the &nbsp; in the otherwise empty anchor! This is important, as the browser won’t see the anchor as a valid navigation target without some contents.

<div class="anchor">
  <a name="target">&nbsp;</a>
  <h2 class="target-label">I am a good target</h2>
</div>

And the container just needs to be made a positioned element to allow our hidden anchor to be positioned relative to it, along with the visual label.

.anchor {
  position: relative;
}

The end result of implementing the target and fixed header as explained.

You can see the whole effect demonstrated on CodePen.

For more front-end tips, check out this post about jQuery, or this one about CSS Grid versus frameworks.

Caktus GroupAutomating Dokku Setup with AWS Managed Services

Dokku is a great little tool. It lets you set up your own virtual machine (VM) to facilitate quick and easy Heroku-like deployments through a git push command. Builds are fast, and updating environment variables is easy. The problem is that Dokku includes all of your services on a single instance. When you run your database on the Dokku instance, you risk losing it (and any data that's not yet backed up) should your VM suddenly fail.

Enter Amazon Web Services (AWS). By creating your database via Amazon's Relational Database Service (RDS), you get the benefit of simple deploys along with the redundancy and automated failover that can be set up with RDS. AWS, of course, includes other managed services that might help reduce the need to configure and maintain extra services on your Dokku instance, such as ElastiCache and Elasticsearch.

I've previously written about managing your AWS container infrastructure with Python and described a new project I'm working on called AWS Web Stacks. Sparked by some conversations with colleagues at the Caktus office, I began wondering if it would be possible to use a Dokku instance in place of Elastic Beanstalk (EB) or Elastic Container Service (ECS) to help simplify deployments. It turns out that it is not only possible to use Dokku in place of EB or ECS in a CloudFormation stack, but doing so speeds up build and deployment times by an order of magnitude, all while substituting a simple, open source tool for what was previously a vendor-specific resource. This "CloudFormation-aware" Dokku instance accepts inputs via CloudFormation parameters, and watches the CloudFormation stack for updates to resources that might result in changes to its environment variables (such as DATABASE_URL).

The full code (a mere 277 lines as of the time of this post) is available on GitHub, but I think it's helpful to walk through it section by section to understand exactly how CloudFormation and Dokku interact. The original code and the CloudFormation templates in this post are written in troposphere, a library that lets you create CloudFormation templates in Python instead of writing JSON manually.
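
If you haven't seen troposphere before, a minimal sketch looks something like this (the parameter here is illustrative, not one from the actual stack):

from troposphere import Parameter, Template

template = Template()

# Parameters become the fields you fill in when creating the stack in the
# CloudFormation console or via the AWS CLI.
template.add_parameter(Parameter(
    "ExampleInstanceType",  # hypothetical name, for illustration only
    Description="EC2 instance type to use.",
    Type="String",
    Default="t2.micro",
))

# Rendering the template produces the JSON you would otherwise write by hand.
print(template.to_json())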

First, we create some parameters so we can configure the Dokku instance when the stack is created, rather than relying on Dokku's web-based installer, which would open an HTTP server to the public internet.

key_name = template.add_parameter(Parameter(
    "KeyName",
    Description="Name of an existing EC2 KeyPair to enable SSH access to "
                "the AWS EC2 instances",
    Type="AWS::EC2::KeyPair::KeyName",
    ConstraintDescription="must be the name of an existing EC2 KeyPair."
))

dokku_version = template.add_parameter(Parameter(
    "DokkuVersion",
    Description="Dokku version to install, e.g., \"v0.10.4\" (see "
                "https://github.com/dokku/dokku/releases).",
    Type="String",
    Default="v0.10.4",
))

dokku_web_config = template.add_parameter(Parameter(
    "DokkuWebConfig",
    Description="Whether or not to enable the Dokku web config "
                "(defaults to false for security reasons).",
    Type="String",
    AllowedValues=["true", "false"],
    Default="false",
))

dokku_vhost_enable = template.add_parameter(Parameter(
    "DokkuVhostEnable",
    Description="Whether or not to use vhost-based deployments "
                "(e.g., foo.domain.name).",
    Type="String",
    AllowedValues=["true", "false"],
    Default="true",
))

root_size = template.add_parameter(Parameter(
    "RootVolumeSize",
    Description="The size of the root volume (in GB).",
    Type="Number",
    Default="30",
))

ssh_cidr = template.add_parameter(Parameter(
    "SshCidr",
    Description="CIDR block from which to allow SSH access. Restrict "
                "this to your IP, if possible.",
    Type="String",
    Default="0.0.0.0/0",
))

Next, we create a mapping that allows us to look up the correct AMI for the latest Ubuntu 16.04 LTS release by AWS region:

template.add_mapping('RegionMap', {
    "ap-northeast-1": {"AMI": "ami-0417e362"},
    "ap-northeast-2": {"AMI": "ami-536ab33d"},
    "ap-south-1": {"AMI": "ami-df413bb0"},
    "ap-southeast-1": {"AMI": "ami-9f28b3fc"},
    "ap-southeast-2": {"AMI": "ami-bb1901d8"},
    "ca-central-1": {"AMI": "ami-a9c27ccd"},
    "eu-central-1": {"AMI": "ami-958128fa"},
    "eu-west-1": {"AMI": "ami-674cbc1e"},
    "eu-west-2": {"AMI": "ami-03998867"},
    "sa-east-1": {"AMI": "ami-a41869c8"},
    "us-east-1": {"AMI": "ami-1d4e7a66"},
    "us-east-2": {"AMI": "ami-dbbd9dbe"},
    "us-west-1": {"AMI": "ami-969ab1f6"},
    "us-west-2": {"AMI": "ami-8803e0f0"},
})

The AMIs can be located manually via https://cloud-images.ubuntu.com/locator/ec2/, or programmatically via the JSON-like data available at https://cloud-images.ubuntu.com/locator/ec2/releasesTable.
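
As a rough sketch of that programmatic route, something like the script below could regenerate the mapping. Note the assumptions: the endpoint has historically served JSON-like data with a trailing comma and AMI IDs wrapped in HTML links, and that shape may change over time.

import json
import re

import requests

URL = 'https://cloud-images.ubuntu.com/locator/ec2/releasesTable'

def fetch_amis(release='16.04', ami_type='hvm:ebs-ssd'):
    raw = requests.get(URL).text
    # Strip trailing commas so the standard json module will accept the data.
    data = json.loads(re.sub(r',\s*([\]}])', r'\1', raw))
    amis = {}
    for row in data['aaData']:
        region, _name, version, arch, instance_type, _date, link = row[:7]
        if version.startswith(release) and instance_type == ami_type and arch == 'amd64':
            # The AMI ID arrives wrapped in an HTML link; pull out the ami-... part.
            match = re.search(r'ami-\w+', link)
            if match:
                amis[region] = {'AMI': match.group(0)}
    return amis

print(json.dumps(fetch_amis(), indent=4, sort_keys=True))

The output mirrors the RegionMap above, so it can be pasted into the template (or passed to add_mapping directly).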

To allow us to access other resources (such as the S3 buckets and CloudWatch Logs group) created by AWS Web Stacks we also need to set up an IAM instance role and instance profile for our Dokku instance:

instance_role = iam.Role(
    "ContainerInstanceRole",
    template=template,
    AssumeRolePolicyDocument=dict(Statement=[dict(
        Effect="Allow",
        Principal=dict(Service=["ec2.amazonaws.com"]),
        Action=["sts:AssumeRole"],
    )]),
    Path="/",
    Policies=[
        assets_management_policy,  # defined in assets.py
        logging_policy,  # defined in logs.py
    ]
)

instance_profile = iam.InstanceProfile(
    "ContainerInstanceProfile",
    template=template,
    Path="/",
    Roles=[Ref(instance_role)],
)

Next, let's set up a security group for our instance, so we can limit SSH access only to our IP(s) and open only ports 80 and 443 to the world:

security_group = template.add_resource(ec2.SecurityGroup(
    'SecurityGroup',
    GroupDescription='Allows SSH access from SshCidr and HTTP/HTTPS '
                     'access from anywhere.',
    VpcId=Ref(vpc),
    SecurityGroupIngress=[
        ec2.SecurityGroupRule(
            IpProtocol='tcp',
            FromPort=22,
            ToPort=22,
            CidrIp=Ref(ssh_cidr),
        ),
        ec2.SecurityGroupRule(
            IpProtocol='tcp',
            FromPort=80,
            ToPort=80,
            CidrIp='0.0.0.0/0',
        ),
        ec2.SecurityGroupRule(
            IpProtocol='tcp',
            FromPort=443,
            ToPort=443,
            CidrIp='0.0.0.0/0',
        ),
    ]
))

Since EC2 instances themselves are ephemeral, let's create an Elastic IP that we can keep assigned to our current Dokku instance, in the event the instance needs to be recreated for some reason:

eip = template.add_resource(ec2.EIP("Eip"))

Now for the EC2 instance itself. This resource makes up nearly half the template, so we'll take it section by section. The first part is relatively straightforward. We create the instance with the correct AMI for our region; the instance type, SSH public key, and root volume size configured in the stack parameters; and the security group, instance profile, and VPC subnet we defined elsewhere in the stack:

ec2_instance_name = 'Ec2Instance'
ec2_instance = template.add_resource(ec2.Instance(
    ec2_instance_name,
    ImageId=FindInMap("RegionMap", Ref("AWS::Region"), "AMI"),
    InstanceType=container_instance_type,
    KeyName=Ref(key_name),
    SecurityGroupIds=[Ref(security_group)],
    IamInstanceProfile=Ref(instance_profile),
    SubnetId=Ref(container_a_subnet),
    BlockDeviceMappings=[
        ec2.BlockDeviceMapping(
            DeviceName="/dev/sda1",
            Ebs=ec2.EBSBlockDevice(
                VolumeSize=Ref(root_size),
            )
        ),
    ],
    # ...
    Tags=Tags(
        Name=Ref("AWS::StackName"),
    ),
))

Next, we define a CreationPolicy that allows the instance to alert CloudFormation when it's finished installing Dokku:

ec2_instance = template.add_resource(ec2.Instance(
    # ...
    CreationPolicy=CreationPolicy(
        ResourceSignal=ResourceSignal(
            Timeout='PT10M',  # 10 minutes
        ),
    ),
    # ...
))

The UserData section defines a script that is run when the instance is initially created. This is the only time this script is run. In it, we install the CloudFormation helper scripts, execute a set of scripts that we define later, and signal to CloudFormation that the instance creation is finished:

ec2_instance = template.add_resource(ec2.Instance(
    # ...
    UserData=Base64(Join('', [
        '#!/bin/bash\n',
        # install cfn helper scripts
        'apt-get update\n',
        'apt-get -y install python-pip\n',
        'pip install https://s3.amazonaws.com/cloudformation-examples/'
        'aws-cfn-bootstrap-latest.tar.gz\n',
        'cp /usr/local/init/ubuntu/cfn-hup /etc/init.d/cfn-hup\n',
        'chmod +x /etc/init.d/cfn-hup\n',
        # don't start cfn-hup yet until we install cfn-hup.conf
        'update-rc.d cfn-hup defaults\n',
        # call our "on_first_boot" configset (defined below):
        'cfn-init --stack="', Ref('AWS::StackName'), '"',
        ' --region=', Ref('AWS::Region'),
        ' -r %s -c on_first_boot\n' % ec2_instance_name,
        # send the exit code from cfn-init to our CreationPolicy:
        'cfn-signal -e $? --stack="', Ref('AWS::StackName'), '"',
        ' --region=', Ref('AWS::Region'),
        ' --resource %s\n' % ec2_instance_name,
    ])),
    # ...
))

Finally, in the MetaData section, we define a set of cloud-init scripts that (a) install Dokku, (b) configure global Dokku environment variables with the environment variables based on our stack (e.g., DATABASE_URL, CACHE_URL, ELASTICSEARCH_ENDPOINT, etc.), (c) install some configuration files needed by the cfn-hup service, and (d) start the cfn-hup service:

ec2_instance = template.add_resource(ec2.Instance(
    # ...
    Metadata=cloudformation.Metadata(
        cloudformation.Init(
            cloudformation.InitConfigSets(
                on_first_boot=['install_dokku', 'set_dokku_env', 'start_cfn_hup'],
                on_metadata_update=['set_dokku_env'],
            ),
            install_dokku=cloudformation.InitConfig(
                commands={
                    '01_fetch': {
                        'command': Join('', [
                            'wget https://raw.githubusercontent.com/dokku/dokku/',
                            Ref(dokku_version),
                            '/bootstrap.sh',
                        ]),
                        'cwd': '~',
                    },
                    '02_install': {
                        'command': 'sudo -E bash bootstrap.sh',
                        'env': {
                            'DOKKU_TAG': Ref(dokku_version),
                            'DOKKU_VHOST_ENABLE': Ref(dokku_vhost_enable),
                            'DOKKU_WEB_CONFIG': Ref(dokku_web_config),
                            'DOKKU_HOSTNAME': domain_name,
                            # use the key configured by key_name
                            'DOKKU_KEY_FILE': '/home/ubuntu/.ssh/authorized_keys',
                            # should be the default, but be explicit just in case
                            'DOKKU_SKIP_KEY_FILE': 'false',
                        },
                        'cwd': '~',
                    },
                },
            ),
            set_dokku_env=cloudformation.InitConfig(
                commands={
                    '01_set_env': {
                        # redirect output to /dev/null so we don't write
                        # environment variables to log file
                        'command': 'dokku config:set --global {} >/dev/null'.format(
                            ' '.join(['=$'.join([k, k]) for k in dict(environment_variables).keys()]),
                        ),
                        'env': dict(environment_variables),
                    },
                },
            ),
            start_cfn_hup=cloudformation.InitConfig(
                commands={
                    '01_start': {
                        'command': 'service cfn-hup start',
                    },
                },
                files={
                    '/etc/cfn/cfn-hup.conf': {
                        'content': Join('', [
                            '[main]\n',
                            'stack=', Ref('AWS::StackName'), '\n',
                            'region=', Ref('AWS::Region'), '\n',
                            'umask=022\n',
                            'interval=1\n',  # check for changes every minute
                            'verbose=true\n',
                        ]),
                        'mode': '000400',
                        'owner': 'root',
                        'group': 'root',
                    },
                    '/etc/cfn/hooks.d/cfn-auto-reloader.conf': {
                        'content': Join('', [
                            # trigger the on_metadata_update configset on any
                            # changes to Ec2Instance metadata
                            '[cfn-auto-reloader-hook]\n',
                            'triggers=post.update\n',
                            'path=Resources.%s.Metadata\n' % ec2_instance_name,
                            'action=/usr/local/bin/cfn-init',
                            ' --stack=', Ref('AWS::StackName'),
                            ' --resource=%s' % ec2_instance_name,
                            ' --configsets=on_metadata_update',
                            ' --region=', Ref('AWS::Region'), '\n',
                            'runas=root\n',
                        ]),
                        'mode': '000400',
                        'owner': 'root',
                        'group': 'root',
                    },
                },
            ),
        ),
    ),
    # ...
))

The install_dokku and start_cfn_hup scripts are configured to run only the first time the instance boots, whereas the set_dokku_env script is configured to run any time any metadata associated with the EC2 instance changes.

Want to give it a try? Before creating a stack, you'll need to upload your SSH public key to the Key Pairs section of the AWS console so you can select it via the KeyName parameter. Click the Launch Stack button below to create your own stack on AWS. For help filling in the CloudFormation parameters, refer to the Specify Details section of the post on managing your AWS container infrastructure with Python. If you create a new account to try it out, or if your account is less than 12 months old and you're not already using free tier resources, the default instance types in the stack should fit within the free tier; unneeded services can be disabled by selecting (none) for their instance type.

Launch Stack button: Dokku-No-NAT

Once the stack is set up, you can deploy to it as you would to any Dokku instance (or to Heroku proper):

ssh dokku@<your domain or IP> apps:create python-sample
git clone https://github.com/heroku/python-sample.git
cd python-sample
git remote add dokku dokku@<your domain or IP>:python-sample
git push dokku master

Alternatively, fork the aws-web-stacks repo on GitHub and adjust it to suit your needs. Contributions welcome.

Good luck and have fun!

Caktus GroupUser-Centered Navigation Design

Designing navigation that will support the needs of website users is one of the more important aspects of site usability. At Caktus we practice iterative, user-centered navigation design, which includes user feedback.

Identify Content Categories Through Card Sorting

Before devising a way for users to navigate content, it’s a good idea to make sure that the content is organized in a way that makes sense to them. Better yet, find out how users would categorize content. One way to do this is through card sorting.

There are three methods to carry out a card sorting study:

  • Open: Make a list of labels representing pieces of content and let users create and name their own categories to organize the labels.

An example of open card sorting. Screen capture of an open card sorting interface in OptimalWorkshop.com

  • Closed: Provide users with a list of categories along with the labels representing the pieces of your content, and allow them to categorize the content labels into the provided categories.

An example of closed card sorting. Screen capture of a closed card sorting interface in OptimalWorkshop.com

  • Hybrid: Provide a set of predetermined categories and allow users to create their own categories. Let them organize content labels into the predetermined categories and/or their own categories.

An example of hybrid card sorting. Screen capture of a hybrid card sorting interface in OptimalWorkshop.com

A card sorting study will reveal how users think about categorizing content. It can be conducted with index cards or sticky notes, or with a digital tool. Advantages that digital tools offer include the ability to conduct remote studies and a quick analysis of results. For example, they can display the results of a card sort as a matrix showing what percentage of users placed which piece of content into which category. At Caktus, we use OptimalWorkshop for card sorting, as well as for treejack and first-click testing (described below).

Pro tip: If, prior to card sorting, you have already established guidelines for controlled vocabulary on your website, a closed card sorting may be a good choice. If you are still deciding on terminology, learning the words your audience uses to describe your content in an open card sorting study may help to provide invaluable insights.

A popular placements matrix showing the results of a card sorting exercise. Screen capture of Popular Placements Matrix in OptimalWorkshop.com

Validate the Content Organization Through Treejack Testing

Once you come up with the first iteration of content organization, it is a good idea to validate that content structure through treejack testing (also known as “reverse card sorting”).

In treejack, you build a tree-like structure out of labels representing content. A treejack consists of nested levels of content labels that mimic your intended information architecture. During the test, users are asked to find specific pieces of content within that tree.

An example of a treejack test. Screen capture of a treejack interface in OptimalWorkshop.com

If the treejack is based on results from a card sorting study, you might expect that users find content labels exactly where you put them. Let go of that expectation. No classification of content you may come up with, even with the help of users, is going to be perfect. It’s more useful to think of treejack as another opportunity to refine your content organization.

Pro tip: What if the results of a treejack test contradict the results of a card sorting study? That may happen, especially if both studies are qualitative, meaning that they rely on a small sample group of users. That means your job is not done, and you should continue to tweak and test.

Continue with First-Click Testing

When searching for content on a live website, users rely on a number of cues offered by good design. Those cues are absent in treejack testing, and that may be a factor preventing users from being successful. Continue testing content organization by giving users additional context. Asking users where they would click within a static mockup to find a specific piece of content can offer insights into users’ mental models. This may help resolve any ambiguities between card sorting and treejack testing.

Pro tip: When coming up with tasks for a first-click test, avoid using words that are present in links and buttons in the interface design that you are testing. For example, if you are testing a “Contact Us” button, don’t ask the user, “Where would you click to contact this company?” Instead, ask, “Where would you click to get in touch with this company?” Also, avoid asking leading questions. For example, instead of asking, “Would you look for squash under vegetables or under meat?” ask, “Where would you click to find squash?”

An example of first click testing, asking where a user would click to learn what Caktus Group does. Screen capture of a first click testing interface in OptimalWorkshop.com

Conduct Usability Testing on a Live Interface

By the time a design is translated into code, it should have iterated on the organization of content and the navigation pattern based on results from card sorting, treejack, and first-click testing. Now a live interface can be tested, which adds a new dimension that may facilitate or hamper users’ ability to navigate. Usability testing on a live interface is a chance to find out how your design decisions hold up.

A usability test for a to-do app. Image source: Validately.com

Pro tip: If previous user tests left you with unanswered questions about content categorization, begin by focusing on tasks that will help you resolve those questions. Use the same or similar tasks to those you gave users during the first-click testing. Pay attention not only to what users do, but also to what they say in order to understand the mental models that guide their interactions with the interface.

Takeaways

The process of organizing content and identifying navigation patterns that will support user goals is messy (learn more about how to make sense of any mess from Abby Covert). There is no perfect solution. The best option is to identify common mental models and patterns, and find your content structure and navigation pattern in that knowledge. The tricky part in qualitative studies is to figure out what is a quirk and what is a pattern. Repetitive testing with a small sample group of users is a good way to come closer to the answers you seek.

Interested in learning more about UX? We have posts about product discovery, the principles of good UX design, user story mapping, and more.

Caktus GroupEliciting Helpful Stakeholder Feedback

Client feedback is integral to the success of a project, and as a product owner, obtaining it is part of your responsibility. Good feedback is not synonymous with positive or negative feedback. A client should feel empowered and comfortable enough to speak up when something isn’t right. If they wait to share their honest thoughts, there is a high chance the problem will cost more time and money to fix down the road.

Below are some suggestions to elicit better feedback from your clients. Here at Caktus we present our work in sprint reviews, but these tips can be applied anytime you are presenting work and require client feedback:

  • Start your presentation by being extremely clear on the goals of the meeting. Let the client know that the entire purpose is to get their feedback on the stories/features your team is presenting. Remind them that they will not hurt anyone’s feelings if they tell you what is not working for them (they should, of course, provide details and not just a generic, “This is terrible and wrong” comment).

  • Share only the stories or features that will elicit feedback. There is often work done in a sprint that does not have any user-visible components (e.g., technical debt). Feel free to let the client know that those items have been accomplished if you think it is relevant for them to know (after all, it was work that the team accomplished). However, spend the majority of the time sharing features that they can see and understand, and that do require their feedback.

  • Tell a story. Go through the completed work as a sequence of events as the user would experience them. Be careful not to just review the work ticket by ticket, but as a holistic version of how the overall feature works.

  • Show the functionality from the customer’s perspective, not from the code level.

  • Use real data, or at least data that makes sense. Populating the application with lorem ipsum or some other random dummy data will make it difficult for you to present the app in a way that makes sense for the client. For example, if you are creating an app for booking flights, you will want the data to reflect that (cities, times, airlines, dates), even if it is just placeholder data.

  • Ask your developer to tell the client why they developed a feature in the way that they did, how it benefits the user, and what kind of feedback is needed. “For this feature, we made it work this way because ABC. Does this accomplish what the user needs to do? What are we missing?”

  • Coach the client on the specific types of feedback that would be most helpful. “Here we are looking for specific feedback regarding the navigation,” or, “For this feature, how close is this layout to what you had envisioned?”

  • Ask for the feedback in an open-ended manner versus questions answerable with yes/no. “How do you envision the user would utilize this feature? In what ways might it be confusing for them? What might it need to do that it is currently not doing?”

  • Try to make the sprint review compelling, relevant to the audience, and at an appropriate technical level. This will help you keep people’s attention and ensure they are engaged enough to give the feedback you need.

Guiding your client in a way that helps them articulate and communicate what is working and what is not will help to ensure that you are building the product they want. Getting the feedback as early as possible helps the team do this within the time and budget allotted. Good feedback will lead to a good product.

Find more project management tips in this post about being a product owner in a client services organization.

Caktus GroupThe Importance of Developer Communities

Go to any major city and you will be able to find a user group for just about every major, modern programming language. Developers meet in their off hours to discuss what’s new in their language of choice, challenges they’ve encountered, and different ways of doing things. If you’ve never been to one of these groups, it might be easy to brush them off as an unimportant outlet where people talk in way too much detail about a geeky interest. In reality, most of the attendees are professionals who are looking to build skills and find new ways to solve problems.

Why do we sacrifice our personal time to discuss the things we do all day at work? Simply put, it makes us better programmers. When I attend a meetup or talk, even on topics which only have a small overlap with the code I write on a day-to-day basis, I always learn something new. I don’t need to jot it down or record it; I wouldn’t ever think to go back and reference those notes anyway. But weeks, months, or sometimes even years later, when I come across a hurdle which requires a creative solution, something may nag me from the back of my mind. “Remember you once heard a talk about _?” it asks. “Maybe a solution like that one would help here?” To Google I go, with a topic in mind that gives me a jumping-off point.

Having diverse little nuggets floating around in the recesses of my memory gives me a large bank of ideas to draw from when I need a solution I may not have used before. The memory may not even be directly applicable, but you’d be surprised how often there are parallel solutions in wildly different areas. These ideas don’t always pan out, but they get me past that coder’s block often enough to be very much worth the investment of my time.

Besides the benefits for an individual coder though, user groups for a programming language or industry can help the broader community of developers in a number of ways. Most transparently, these groups provide an obvious place for a new developer to meet people, learn more about their language(s) of choice, and get advice on how to gain the necessary skills to accomplish their goals. But there’s a much more subtle benefit going on here as well. By attending these groups and interacting with people who may not otherwise cross paths, even the most experienced coders can have their rigid ideas challenged and break out of restrictive thinking.

Every workplace settles into a culture, in which certain ideas and techniques are considered to be “best practices,” often for very good reason. It is all too easy for these best practices to get calcified into “the way we’ve always done things,” and we all have stories about the perils of that line of thought. Developer communities, by providing a platform for developers to interact with a diversity of people and in less-structured contexts, allow them to break out of their workplace culture and have those ideas challenged. Sometimes, a developer will address the challenge and come out even more certain that their preferred methodology is really best, and other times they’ll come out thinking that there may be cases in which a different approach is a better solution. But no matter what, they will have had an opportunity to think through a practice that they’ve been executing for months or years without examining.

On occasion, revolutionary ideas come out of these groups, and gradually percolate through wider communities, but every single meeting contains a benefit for someone present. For these reasons and more, Caktus is proud to offer a space for these groups to meet, and to support the wider community. By welcoming discussion and learning into our Tech Space, we hope to encourage growth in individuals and the community, and challenge some of our own calcified ideas while we’re at it.

Caktus GroupCommuter Benefits and Encouraging Sustainable Commuting

Growth for Durham has meant a lot of great things for Caktus, from an expanding pool of tech talent to an increased interest in civic-minded tech solutions to shape the evolving community. This growth has also brought logistical challenges. Most recently, this meant providing adequate commuter support to our employees in a city whose transportation infrastructure is still nascent.

With limited available parking and an ever growing staff, we were unsure how best to tackle this problem. Rather than find additional parking where it didn’t yet exist, we began instead to investigate how we could potentially change commuting culture itself and create a more sustainable pathway for continued growth.

After careful examination and research, Caktus decided to vastly expand our commuter benefits. Beyond simply offering subsidized parking, starting in September of 2017, Cakti will have a range of benefits to choose from. Employees who opt out of the parking benefit can choose instead to receive stipends for expenses related to biking, or pre-tax contributions to help cover the cost of public transportation.

As part of this expansion, Caktus has also opted to build ties with local businesses and programs that offer additional perks to green commuters. Employees who choose to bike to work will become automatic members of Bicycle Benefits, an independent group that works with local businesses to offer perks and discounts to local bikers. We’ve also partnered with GoTriangle, the public face of the Research Triangle Public Transportation Authority, and their Emergency Ride Home and GoPerks programs to offer further aid, perks, and rewards to employees who choose greener commuting options.

By offering commuter benefits along with additional perks and rewards for green commuting, we hope to transition a number of our staff to greener modes of transportation. Not only will this provide a more sustainable growth plan in Durham’s increasingly urban environment, but it also encourages us to live up to what we value most as a company. We strive to do what’s best for the community, whether that be the thing that supports our employees or the thing that supports a local call for sustainable commuting. We hope this will be another step in that direction.

Interested in working for Caktus? Head to our Careers page to view our open positions.

Caktus GroupCaktus 10th Anniversary Event

Caktus turned ten this year and we recently celebrated with a party at our office in Durham, NC. We wouldn’t be where we are today without our employees, clients, family, and friends, so this wasn’t just a celebration of Caktus. It was a celebration of the relationships the company is built on.

Caktus party guests having a good time.

The last five years

Since our last milestone birthday celebration five years ago, Caktus has more than doubled in size, growing from 15 employees to 30-plus. Co-founder and CTO Colin Copeland was honored with the Triangle Business Journal’s 40 Under 40 award. The company itself moved from Carrboro to the historic building we now own in downtown Durham, where we’re pleased to be able to host local tech meetups when we’re not using it for our own special events.

Guests at the Caktus 10th anniversary party listening to a speech in the Tech Space.

In our work, we’ve continued our mission to use technology for good, building the world’s first SMS-based voter registration system, beginning the Epic Allies project to improve outcomes for young men with HIV/AIDS, and launching the Open Data Policing website for tracking police stop data.

Celebrations

Co-founder and CEO Tobias McNulty gave a speech to mark the occasion, sharing a view of how far the company has come. There was also enthusiasm for what we can achieve at Caktus in the next ten years - and those to come after.

Caktus co-founders Tobias McNulty and Colin Copeland.

As part of the celebrations we had food and birthday cupcakes, as well as prize giveaways for our team. Family and friends of Caktus employees joined in on the fun and games.

Caktus employees playing a game at the 10th-anniversary party.

We welcomed several clients as well, and we thank them along with all of those who have worked with us for giving us the opportunity to create meaningful tools that help people. A number of our clients have been with us for years, and we’re proud to have such a good relationship with those who trust us to build solutions for them.

Looking forward

The communities we’re a part of and the individuals in those communities have always been central to our focus. Growing sharp web apps is what we do, but it’s the people who build them and those we build them for that matter. With that in mind, we look forward to continuing to develop our internal initiatives around diverse representation, transparency and fair pay. We are also dedicated to continuing support of the various communities we are a part of, whether technical or geographic, through our charitable giving initiatives, conference and meetup sponsorships, open source contributions, and requiring a code of conduct to be in place and enforced at events we sponsor or attend.

Supportive, inclusive, and welcoming communities helped Caktus grow to where we are today, and we’re honored to be in a position to give back as we celebrate our tenth anniversary.

Credit for all photos: Madeline Gray.

Caktus GroupFalse Peaks and Temporary Code

In the day-to-day work of building new software and maintaining old software, we can easily lose sight of the bigger picture. I think we can find perspective when we step back and walk through the evolution of a single piece of software.

For example: first, you are asked for a simple slideshow to showcase a few images handed to you. Just five images and the images won't change.

An easy request! It only takes you a short time to build with some simple jQuery. You show the client, they approve it. You deploy it to production and call it a day.

The next week, your client comes back with a new request. They don't think the users notice the slideshow can be navigated. They ask for previews of the next and last image, to use for navigation:

Wireframe of the requested previous/next preview navigation: https://blog-post-false-peaks.caktustest.net/images/wireframe_ex.png

So you jump in. It’s an easy enough addition to the pretty simple slideshow widget you've already built. You slap in two images, position them just so, and add a few more lines of jQuery to bind navigation to them. Once again, it’s a quick review with the client and you ship it off to production.

But the next day, there's a bug report from a user. The slideshow works, the thumbnails show the right image, and the new previous/next preview images navigate correctly. However, the features don't work together, because the thumbnail navigation doesn’t change the new left and right preview images you added.

The client also wants the new navigation to act like a carousel and animate.

Now they want to add more photos.

And they want to be able to add, reorder, and change the photos on their own. That will break all the assumptions you made based on a fixed number of photos.

Every step along the way you added one more layer. One small request at a time, the overall complexity increased as those small requests added up. You took each step with the most efficient option you could find, but somehow you ended up with costly bloat. How does this keep happening?

Illustration of false peaks: https://blog-post-false-peaks.caktustest.net/images/false-peaks.jpeg

At each step, you took the next best step. Ultimately, this didn't take you where you needed to go.

We don't subscribe to waterfall development practices at Caktus. Agile is a good choice, but as we work through iterations, how do we bridge across those sprints to get a larger picture of our path? And how do we make the larger decisions, about technical debt and otherwise, whose impact extends beyond a single sprint?

Some of the code we write is there to get us somewhere else. Maybe you need to build a prototype to understand the original problem before you can tackle it with a final solution. Sometimes you need to stub out one area of code because your current task is another focus, and you'll come back to finish it or replace it in a future ticket. Many disciplines have these kinds of temporary artifacts, from the scaffolding built by construction crews to sketches artists make before paintings.

Maybe it is harder for software developers because we often don't know what code is temporary ahead of time. A construction crew would never say, "Now that we've built the roof, we really don't need those walls anymore!" but this is what it can often feel like when we refactor or tear down pieces of a project we worked hard on, even if it really is for the best.

My suggestion: become comfortable with tearing down code as part of the iterative process! Extreme Programming calls this rule "Relentlessly Refactor".

We need to think about some of the features we implement as prototypes, even if they've been shipped to end users. We won't always know when new code or features are stop-gaps or prototypes when we're building them, but we may realize later they had been so all along when more information comes to light about where those features need to go next.

Falling into the trap of thinking the work done in sprints is inherently additive is common, but destructive.

If each sprint is about "adding value", we tend to develop a bias towards the addition of artifacts. We consider modification to happen as part of bug fixes, seeing it as correcting a mistake in earlier requirements or code, or as changes stemming from evolving or misunderstood requirements. We may hold a bias against removing artifacts previously added, either within a given sprint or in a later sprint.

Going back to the construction analogy, when you construct a building you create a lot of things that don't end up in the final construction. You build scaffolding for your workers to stand on while the walls are being built up. You build wooden forms to shape concrete, removing them when the foundations and structures are solid.

You fill the building-in-progress with temporary lighting and wiring for equipment until the project is near completion and the permanent electrical and plumbing are hooked up and usable. A construction crew creates a lot of temporary artifacts in the path to creating a permanent structure, and we can learn from this when building software iteratively. We're going to have cases where work that supports what we're completing this sprint isn't necessary or may even be a hindrance in a future sprint. Not everything is permanent, and removing those temporary artifacts isn't a step backward or correcting a mistake. It is just a form of progress.

Jeff TrawickUpgrading from python-social-auth 0.2.19 to social-auth-core 1.4.0 + social-auth-app-django 1.2.0


I had a few issues with this many moons ago when I was trying the initial social-auth-core packaging. Yesterday I was able to get it to work with the latest version, which in turn allowed me to move from Django 1.10 to Django 1.11.
You will most likely encounter failed Django migrations when making the switch. Some posts on the 'net recommend first upgrading to an intermediate version of python-social-auth to resolve that, but I wanted a simpler production switchover, which I found in this social-app-django ticket. The eventual production deploy solution after testing locally with a copy of the production database was:
  1. Temporarily hack my Ansible deploy script to fail after updating the source tree and virtualenv for the new libraries but before running migrations.
  2. On the server, as the project user, run pip uninstall python-social-auth to delete the old package.
  3. On the server, make another copy of the production database and then run update django_migrations set app='social_django' where app='default'; via psql.
  4. On the server, as the project user, run python manage.py migrate social_django 0001 --fake.
  5. Remove the temporary fail from my Ansible deploy script.
  6. Run the deploy again, which will run the remaining migrations.

Caktus GroupQuick Tips: How to Change Your Name in JIRA

In May 2017, Atlassian rolled out the new Atlassian ID feature, which gives Atlassian product users a central Atlassian account that holds their user details. When this change occurred, our integration with G Suite interacted with the Atlassian ID feature in a way that left some users with strange display names in JIRA, which I (as the JIRA admin) can’t fix, since users now control their own profiles. However, they don’t control their profiles through JIRA. So, how does one change the name that JIRA displays? (Hint: you can’t do it through User Management.)

Step 1. Go to https://id.atlassian.com/profile/profile.action and log in.

JIRA account settings page

Step 2. Enter your desired display name in the field labeled Full Name.

JIRA account settings with a name change.

Step 3. Click Save.

Step 4. Return to your JIRA instance. If your name has not updated, log out and then back in again.

Step 5. Revel in your new name.

Setting a new JIRA name.

Want more JIRA? Read up on how we made use of the priority field to ease team anxiety.

Caktus GroupTips for Product Ownership and Project Management in a Client Services Organization

Looking for some pointers to improve my own client management skills, I scoured the internet to find practical ideas on how to handle challenges better as the product owner (PO) in a client-services organization. I came up completely short.

Using Scrum in a client-services organization comes with its own unique challenges. As the product owner (PO), you are technically the project’s key stakeholder (along with many other responsibilities nicely outlined here). However, when serving as a PO with external clients, you hold the responsibility, but not always the power, to make the final decisions on the priorities and final features of the product. Clients sometimes have an idea of what they want, but it may run counter to what you and your team recommend based on your experience (which is why the client hired you in the first place! It is okay to offer alternatives to their requests, as long as you can back them up with facts). Ultimately the client makes the final decision, but it is our job to give them our best recommendations.

Some companies designate the client as the PO, with all of the responsibilities that go along with that. This approach is often not feasible at Caktus since our clients are off-site, not part of the Scrum team, and have many other external responsibilities that do not involve our project. The client is the subject expert, but not necessarily well-versed enough in Scrum or software development to have the skill set to be a good PO at a technical level.

Here are some tips that I think are helpful for working with non-technical, external clients when using Scrum.

Set and reinforce expectations

You can explain Scrum in detail and give real-world situations to help build an understanding of what it entails, but until a person works within that framework, their full grasp of it will be limited. If your client is working in a less technical environment, it is likely Scrum is new to them. Use any opportunity (discovery phase, Sprint Zero, every review and relevant communication) as an opportunity to underscore what you need from them as a client to help make this project successful. At Caktus, Scrum represents uncharted territory for many of our customers, but the process works because we treat each project as a learning opportunity, incrementally reinforcing the process and improving the agility of our partnership throughout the project.

Be transparent, but take into account the client’s background

In the name of transparency, we always offer clients full access to our ticket tracker and product backlog, a detailed release plan for the most valuable features listing out all the tickets we believe we can achieve within the sprints, a breakdown of and calendar invites for all the sprint activities for the team, and how the activities relate to their particular project (i.e., in backlog grooming we do ABC, in sprint planning we do XYZ, etc.).

Too much information, however, can be paralyzing. Get to know your client (how technical they are, how much time they have to be involved in the project, etc.) before deciding what information will be most helpful for them. The whole point is to create a product that delights the client, and make the process of getting there as smooth and easy as possible.

A client with limited technical knowledge may find digging through a product backlog requires more time than they have. Instead, you can give them consistent updates in other formats, even something as simple as a bulleted list. For example: “These are the tickets we are going to estimate in backlog grooming on Tuesday. Please review the user stories and the Acceptance Criteria (AC) to ensure they align with what you feel is important for this feature.” At Caktus, we typically take on the day-to-day management of the product backlog, based on our understanding of the project and the relative priorities communicated to us by our clients. For some clients this can take the place of having full access to everything, which at times serves more to overwhelm than to inform.

Similarly, the release plan should be built around certain features rather than specific tickets. Since a release plan is a best guess based on the initial estimates of the team and is constantly being adjusted, including features to be completed rather than specific tickets gives the team the means to focus attention on meeting the overarching project goals. Hewing to the release plan is not always possible, but when you can do it, it makes things less stressful for your client.

(Over) Communicate

There is a lot to accomplish in a sprint review meeting. You need to talk about what was accomplished, share it with the client, discuss their feedback on the completed work, talk about priorities for the upcoming sprint, and then possibly make adjustments based on the feedback that came out of the review. To help take the pressure off the client to review everything, give feedback, and think about next steps in a one-hour meeting, let clients know when features are ready for review on staging, in advance of the sprint review. That way they have ample time to play around with the features. By the time sprint review comes, they have a solid understanding of progress and we can use the sprint review to walk through specific feedback.

We recommend writing up your upcoming sprint goals as early as you can and sharing them ahead of time. It's important to note that these are only the goals, and that the team decides what they pull into the sprint. Then, after sprint planning, keep the client updated on which features your team was able to pull into the sprint so their expectations are set appropriately.

If you need something from a client, just ask. Explaining dependencies also helps (i.e., adjusting this feature too far down the road will be more expensive than fixing it now, so please give us feedback by X date so we can address it soon). Throughout my four-plus years at Caktus, I've found that technical expertise is only half the battle, and our most successful projects are those in which we stay in constant communication with the client.

Compromise when it makes sense for the client and for your team

Some clients are not comfortable using or navigating the tools we use every day. Therefore, if it helps a client to, for example, download ticket details from JIRA into an Excel spreadsheet formatted in a way that allows them to understand something better, it is worth the extra time and effort. However, keep in mind the overall balance of time and effort. If they ask you to keep a shared spreadsheet updated in real time with all updates in JIRA, help them understand why that might not be a good idea, and come up with some alternative solutions to get them what they need.

Conclusion

Much of what is out there on the internet related to project ownership is related to being a PO at a software company, with internal stakeholders. Having external clients doesn’t make Scrum impossible; it just makes it a little bit more challenging, and requires some tweaking to keep your client - and your team - happy!

Caktus GroupAdvanced Django File Handling

Modern Django's file handling capabilities go well beyond what's covered in the tutorial. By customizing the handlers that Django uses, you can do things pretty much any way you want.

Static versus media files

Django divides the files your web site is serving unchanged (as opposed to content delivered by your Django views) into two types.

  • "Static" files are files provided by you, the website developer. For example, these could be JavaScript and CSS files, HTML files for static pages, image and font files used to make your pages look nicer, sample files for users to download, etc. Static files are often stored in your version control system alongside your code.
  • "Media" files are files provided by users of the site, uploaded to and stored by the site, and possibly later served to site users. These can include uploaded pictures, avatars, user files, etc. These files don't exist until users start using the site.

Two jobs of a Django storage class

Both kinds of files are managed by code in Django storage classes. By configuring Django to use different storage classes, you can change how the files are managed.

A storage class has two jobs:

  • Accept a name and a blob of data from Django and store the data under that name.
  • Accept a name of a previously-stored blob of data, and return a URL that when accessed will return that blob of data.

The beauty of this system is that our static and media files don't even need to be stored as files. As long as the storage class can do those two things, it'll all work.
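
To make those two jobs concrete, here is a minimal, hypothetical sketch of a custom storage class. The BlobServiceStorage name and its client object are made up, but _save, _open, exists, and url are the standard hooks Django calls on a storage class:

from django.core.files.base import ContentFile
from django.core.files.storage import Storage

class BlobServiceStorage(Storage):
    """Hypothetical backend that keeps blobs in some external service
    instead of the local filesystem."""

    def __init__(self, client=None):
        self.client = client  # stand-in for whatever SDK talks to the service

    def _save(self, name, content):
        # Job #1: store a blob of data under the given name.
        self.client.put(name, content.read())
        return name

    def _open(self, name, mode='rb'):
        # Django may also ask to read a stored blob back.
        return ContentFile(self.client.get(name))

    def exists(self, name):
        # Used by Django to pick a unique name when there's a collision.
        return self.client.has(name)

    def url(self, name):
        # Job #2: return a URL that serves the stored blob.
        return self.client.public_url(name)

# Hypothetical wiring in settings.py:
# DEFAULT_FILE_STORAGE = 'myproject.storage.BlobServiceStorage'
# STATICFILES_STORAGE = 'myproject.storage.BlobServiceStorage'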

Runserver

Given all this, you'd naturally conclude that if you've changed STATICFILES_STORAGE and DEFAULT_FILE_STORAGE to storage classes that don't look at the STATIC_URL, STATIC_ROOT, MEDIA_URL, and MEDIA_ROOT settings, you don't have to set those at all.

However, if you remove them from your settings, and try to use runserver, you'll get errors. It turns out that when running with runserver, django.contrib.staticfiles.storage.StaticFilesStorage is not the only code that looks at STATIC_URL, STATIC_ROOT, MEDIA_URL, and MEDIA_ROOT.

This is rarely a problem in practice. runserver should only be used for local development, and when working locally, you'll most likely just use the default storage classes for simplicity, so you'll be configuring those settings anyway. And if you want to run locally in the exact same way as your deployed site, possibly using other storage classes, then you should be running Django the same way you do when deployed as well, and not using runserver.

But you might run into this in weird cases, or just be curious. Here's what's going on.

When staticfiles is installed, it provides its own version of the runserver command that arranges to serve static files for URLs that start with STATIC_URL, looking for those files under STATIC_ROOT. (In other words, it's bypassing the static files storage class.) Therefore, STATIC_URL and STATIC_ROOT need to be valid if you need that to work. Also, when initialized, it does some sanity checks on all four variables (STATIC_URL, STATIC_ROOT, MEDIA_URL, and MEDIA_ROOT), and the checks assume those variables' standard roles, even if the file storage classes have been changed in STATICFILES_STORAGE and/or DEFAULT_FILE_STORAGE.

If you really need to use runserver with some other static file storage class, you can either configure those four settings to something that'll make runserver happy, or use the --nostatic option with runserver to tell it not to try to serve static files, and then it won't look at those settings at startup.
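
For the common case of local development with the default storage classes, that just means keeping the four standard settings around. A minimal sketch, with illustrative paths, might look like:

# settings.py -- hypothetical local-development values that keep
# runserver's startup checks (and its static file serving) happy.
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')

# Or skip static serving at startup entirely:
#   python manage.py runserver --nostatic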

Using media files in Django

Media files are typically managed in Python using FileField and ImageField fields on models. As far as your database is concerned, these are just char columns storing relative paths, but the fields wrap that with code to use the media file storage class.

In a template, you use the url attribute on the file or image field to get a URL for the underlying file.

For example, if user.avatar is an ImageField on your user model, then

<img src="{{ user.avatar.url }}">

would embed the user's avatar image in the web page.

The default storage class for media, django.core.files.storage.FileSystemStorage, saves files to a path inside the local directory named by MEDIA_ROOT, under a subdirectory named by the field’s upload_to value. When the file’s url attribute is accessed, it returns MEDIA_URL prepended to the file’s path relative to MEDIA_ROOT.

An example might help. Suppose we have these settings:

MEDIA_ROOT = '/var/media/'
MEDIA_URL = '/media/'

and this is part of our user model:

avatar = models.ImageField(upload_to='avatars')

When a user uploads an avatar image, it might be saved as /var/media/avatars/12345.png. That's MEDIA_ROOT, plus the value of upload_to for this field, plus a filename (which is typically the filename provided by the upload, but not always).

Then <img src="{{ user.avatar.url }}"> would expand to <img src="/media/avatars/12345.png">. That's MEDIA_URL plus upload_to plus the filename.

Now suppose we've changed DEFAULT_FILE_STORAGE to some other storage class. Maybe the storage class saves the media files as attachments to email messages on an IMAP server - Django doesn't care.

When 12345.png is uploaded to our ImageField, Django asks the storage class to save the contents as avatars/12345.png. If there's already something stored under that name, Django will change the name to come up with something unique. Django stores the resulting filename in the database field. And that's all Django cares about.
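With the default storage class, you can watch this unique-naming behavior from a Django shell. A quick sketch (the byte contents and the exact suffix are placeholders):

from django.core.files.base import ContentFile
from django.core.files.storage import default_storage

first = default_storage.save('avatars/12345.png', ContentFile(b'...'))
second = default_storage.save('avatars/12345.png', ContentFile(b'...'))
# first == 'avatars/12345.png'; second comes back with a modified,
# unique name chosen by the storage class, e.g. 'avatars/12345_a1b2c3d.png'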

Now, what happens when we put <img src="{{ user.avatar.url }}"> in our template? Django will retrieve the filename from the database field, pass that filename (maybe avatars/12345.png) to the storage class, and ask it to return a URL that, when the user's browser requests it, will return the contents of avatars/12345.png. Django doesn't know what that URL will be, and doesn't have to.

For more on what happens between the user submitting a form with attached files and Django passing bits to a storage class to be saved, you can read the Django docs about File Uploads.

Using Static Files in Django

Remember that static file handling is controlled by the class specified in the STATICFILES_STORAGE setting.

Media files are loaded into storage when users upload files. Static files are provided by us, the website developers, and so they can be loaded into storage beforehand.

The collectstatic management command finds all your static files, and saves each one, using the path relative to the static directory where it was found, into the static files storage. [2]

By default, collectstatic looks for all the files inside static directories in the apps in INSTALLED_APPS, but where it looks is configurable - see the collectstatic docs.

So if you have a file myapp/static/js/stuff.js, collectstatic will find it when it looks in myapp/static, and save it in static files storage as js/stuff.js.
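Loading the static files into storage is then a single management command:

python manage.py collectstatic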

You would most commonly access static files from templates by loading the static template tag library and using the static tag. For our example, you'd ask Django to give you the URL where the user's browser can access js/stuff.js by using {% static 'js/stuff.js' %} in your template. For example, you might write:

{% load static %}
<script src="{% static 'js/stuff.js' %}"></script>

If you're using the default storage class and STATIC_URL is set to http://example.com/, then that would result in:

<script src="http://example.com/js/stuff.js"></script>

Maybe then you deploy it, and are using some fancy storage class that knows how to use a CDN, resulting in:

<script src="http://23487234.niftycdn.com/239487/230498234/js/stuff.js"></script>

Other neat tricks can be played here. A storage class could minify your CSS and JavaScript, compile your LESS or SASS files to CSS, and so forth, and then provide a URL that refers to the optimized version of the static file rather than the one originally saved. That's the basis for useful packages like django-pipeline.

[2] collectstatic uses some optimizations to avoid copying files unnecessarily, such as checking whether the file already exists in storage and comparing timestamps with the source static file, but that's not relevant here.

If you’re looking for more Django tips, we have plenty on our blog.

Caktus GroupDjangoCon 2017 Recap

Mid-August brought travel to Spokane for several Caktus staff members attending DjangoCon 2017. As a Django shop, we were proud to sponsor and attend the event for the eighth year.

Meeting and Greeting

We always look forward to booth time as an opportunity to catch up with fellow Djangonauts and make new connections. Caktus was represented by a team of six this year: Charlotte M, Karen, Mark, Julie, Tobias, and Whitney. We also had new swag and a GoPro Session to give away. Our lucky winner was Vicky. Congratulations!

Winner of our DjangoCon 2017 prize giveaway.

This year we also had a special giveaway: one free ticket to the conference, donated to DjangoGirls Spokane. The winner, Toya, attended DjangoCon for the first time. We hope she had fun!

Top Talks

Our technical staff learned a lot from attending the other talks presented during the conference. Their favorite talks included the keynote by Alicia Carr, The Denormalized Query Engine Design Pattern, and The Power and Responsibility of Unicode Adoption.

Charlotte delivered a well-received talk about writing an API for almost anything. In case you missed it, we’ll add the video to this post as soon as it’s available.

Another excellent talk series presented at DjangoCon!

See You Next Time

As always, we had a great time at DjangoCon and extend our sincere thanks to the organizers, volunteers, staff, presenters, and attendees. It wouldn’t be the same conference without you, and we look forward to seeing you at next year’s event.

Caktus GroupLetting Go of JIRA: One Team's Experiment With a Physical Sprint Board

At Caktus, each team works on multiple client-service projects at once, and it’s sometimes challenging to adapt different clients’ various tools and workflows into a single Scrum team’s process.

One of our Scrum teams struggled with their digital issue tracker; we use JIRA to track most of our projects, including the all-important sprint board feature. However, one client used their own custom issue tracker, and it was not feasible to transfer everything to our JIRA instance. A challenge then arose: how do we visualize the work we are doing for this project on our own sprint board?

We stick with JIRA

Since the tasks were already tracked in the client’s tracker, we did not want to duplicate that effort in JIRA, and we were unable to find an existing app to integrate the two trackers so that the data would sync both ways. But we still wanted the work to be represented in our sprints since it took up a significant portion of the team’s time.

Initially, we included placeholder JIRA tickets in our sprint for each person who would work on this project. Those tickets were assigned story points relative to the time that person was planning to spend on it. Essentially, among our other projects’ tasks and stories, we also had distinct blocks of hours to represent the work being done on this separate project.

This solution started to cause some confusion when the team tried to relate story points directly to hours, and it didn’t add any real value since the placeholder tickets lacked any specificity, so we decided to stop using them altogether. As a result, this project was not represented at all on our sprint board or in our velocity, and we did not have a good overall picture of our sprint work. This hindered our transparency and visibility into the team’s workload, and hurt our ability to allocate time across projects effectively (take a look at this post to see how we do that using tokens!).

We transition to a low-tech solution

Eventually, the team left JIRA behind and started using a physical whiteboard in the team room to visualize sprint work. The board allowed us to include tickets from our tracker and our client’s tracker in one central location.

A physical task board at Caktus.

We divided the board into the same columns that were on our JIRA sprint board to represent ticket status: To Do, In Progress, Pull Request, On Staging, Blocked, and Done. We use sticky notes to represent each user story, task, or bug, color-coded by project. Each sticky contains a ticket number that maps to the ticket in one of the trackers, a short title or description, and a story point estimate. We also started tracking sprint burndown and team velocity on large sticky sheets, also posted on the walls of the team room.

A physical sprint burndown chart at Caktus.

A physical team velocity chart at Caktus.

The physical board evolves

Including distinct tickets from the project in our sprints highlighted another challenge: the project’s priorities were determined by the client instead of by the team’s Product Owner, and the client did not use Scrum. This meant that the client changed the current priorities more frequently than our two-week sprint cadence, and the nature of the project was such that we had to keep up.

The team pointed out that we could not commit to finishing a specific set of tasks for that project since priorities at the beginning of the sprint were not fixed for the following two weeks (which is essential for carrying out a sprint effectively, as it allows the team to stay focused on a stable goal instead of having to shift gears often).

We decided that the best way to handle uncertain priorities was to divide the whiteboard into horizontal rows (or swimlanes), each with its own rules and expectations:

  • One swimlane for sprint work that we commit to finishing, and whose priorities do not change within the sprint.
  • A second swimlane for work that we want to make progress on but cannot commit to finishing in the sprint (mostly due to external dependencies).
  • A third swimlane for work that we have no control over, such as projects where priorities are not stable enough for two-week sprints, and the release day does not align with the end of our sprint. This swimlane uses more of a Kanban workflow, minus the work in progress limits.

All of the team’s projects are now represented with tickets that map to distinct user stories, tasks, and bugs in one central place, giving the team full visibility into the work being done during the sprint, without committing to work that is likely to fall in priority.

Where we are now

The team continues to work out the kinks of using a physical board, such as overlooking details that are included only in the issue trackers, needing to be physically in the team room to know what to work on next, updating tickets only once a day during standup, and sticky notes falling off the board when the room gets too hot.

We have also observed some distinct benefits to leaving JIRA behind:

  • We can easily include new projects that use any issue tracker on our physical sprint board;
  • The team is fully engaged with the physical artifacts and actively drives standups and sprint planning together, as opposed to having one person operate JIRA while everyone else watches;
  • The team enjoys moving the sticky notes along the board, and takes satisfaction in updating the burndown chart (especially when it gets down to zero!);
  • They feel more freedom to experiment with the board, knowing that the possibilities are only limited by their imagination rather than the capabilities of the software.

I don’t know if the team will continue to use the whiteboard, if they will choose to go back to using JIRA’s sprint board, or if they will come up with some other solution; but as their Scrum Master, I have appreciated the journey, the team’s willingness to experiment and try new things, and their creativity in overcoming the challenges they encountered.

We didn’t always use Scrum at Caktus - check out this blog post to learn how we got started.

Caktus GroupShipIt Day Recap Q3 2017

Caktus recently held the Q3 2017 ShipIt Day. Each quarter, employees take a step back from business as usual and take advantage of time to work on personal projects or otherwise develop skills. This quarter, we enjoyed fresh crêpes while working on a variety of projects, from coloring books to Alexa skills.

Technology for Linguistics

As both a linguist and a developer, Neil looked at using language technology for a larger project led by Western Carolina University to revitalize Cherokee. This polysynthetic language presents challenges for programming due to its complex word structure.

Using finite state morphology with hfst and Giellatekno, Neil explored defining sounds, a lexicon, and rules to develop a model. In the end, he feels a new framework could help support linguists, and says that Caktus has shown him the value of frameworks and good tooling that could be put to use for this purpose.

Front-end Style Guide Primer

Although design isn’t optional in product development, the Agile methodology doesn’t address user interface (UI) or user experience (UX) design. We use Agile at Caktus, but we also believe in the importance of solid UX in our projects.

Basia, Calvin, and Kia worked to fill the gap. They started building a front-end style guide, with the intention to supply a tool for Caktus teams to use in future projects. Among the style guide components considered during this ShipIt Day were layout, typography, and color palettes. Calvin worked to set up the style guide as a standalone app that serves as a demo and testbed for ongoing style guide work. Kia explored the CSS grid as a flexible layout foundation that makes building pages easier and more efficient while accommodating a range of layout needs. Basia focused on typography, investigating responsive font sizing, modular scale, and vertical rhythm. She also started writing color palettes using the color functions in Stylus.

Front-end style guides have long been advocated by Lean UX. They support modular design, enabling development teams to achieve UI and UX consistency across a project. We look forward to continuing this work and putting our front-end style guide into action!

Command Line Interface for Tequila

Jeff B worked on a command line interface to support our use of Tequila. While we currently use Fabric to execute shell commands, it’s not set up to work with Python 3 at the time of writing. Jeff used the Click library to build his project and incorporated difflib from the standard library in order to show a git-style diff of deployment settings. You can dig into the Tequila CLI on the Caktus GitHub account and take a look for yourself!

Wagtail Calendar

Caktus has built several projects using Wagtail CMS, so Charlotte M and Dmitriy looked at adding new functionality. Starting with the goal of incorporating a calendar into the Bakery project, they added an upcoming events button that opens a calendar of events, allowing users to edit and add events.

Charlotte integrated django-scheduler events into Wagtail while Dmitriy focused on integrating the calendar widget onto the EventIndexPage. While they encountered a few challenges which will need further work, they were able to demonstrate a working calendar at the end of ShipIt Day.

Scrum Coloring Book

Charlotte F and Sarah worked together to create a coloring book teaching Scrum information, principles, and diagrams in an easily-digested way. The idea was based on The Scrum Princess. Their story follows Alex, a QA analyst who joins a development team, through the entire process of completing a Scrum project.

Drafting out the Caktus Scrum coloring book.

Over the course of the day, they came up with the flow of the narrative and formatted the book so that each image to color appears on its own page alongside the story text and definitions. Any illustrators out there who want to help it come to life?

QA Test Case Tools

Gerald joined forces with Robbie to follow up on Gerald’s project from our Q2 2017 ShipIt Day. This quarter, our QA analysts tinkered with QMetry, adding it to JIRA to see whether this could be the tool to take Caktus’ QA to the next level.

QMetry creates visibility for test cases related to specific user stories and adds a number of testing functions to JIRA, including the ability to group different scenarios by acceptance criteria and add bugs from within the interface when a test fails. Although there are a few configuration issues to be worked out, they feel that this tool does most of what they want to do without too much back-and-forth.

Wagtail Content Chooser

Phil also took the chance to do some work with Wagtail. Using the built-in page chooser as a guide, he developed a content chooser that shows all of the blocks associated with a page’s StreamFields. The app can fetch a content block by its unique identifier and would enable the admin user to pull that content from other pages into the page being worked on. The next step will be incorporating a save function.

Publishing an Amazon Alexa Skill

For those seeking inspiring quotes, David authored a skill for Amazon Alexa which returns a random quote from forismatic. An avid fan of swag socks, David came across the opportunity to earn some socks (and an Echo Dot) from Amazon if he submitted an Alexa skill and got it certified. He used Flask-Ask, a Flask extension, to develop the skill rapidly, deployed it to AWS Lambda via Zappa, and is now awaiting certification (and socks). Caktus is an AWS Consulting Partner, so acquiring Alexa development chops would present another service we could offer to clients.

Catching Up on Conferences

Dan caught up on videos of talks from recent conferences.

He also looked at the possibility of building a new package that preprocesses JavaScript and CSS, but after starting work he realized there’s a reason why existing packages are complicated and resolved to revisit this another time.

That’s all for now!

Although the ShipIt Day projects represent time away from client work, each project helps our team learn new skills or strengthen existing ones that will eventually contribute toward external work. We see it as a way to invest in both our employees and the company as a whole.

To see some of our skills at work in client projects, check out our Case Studies page.

Caktus GroupTransitioning to Scrum: Mapping Job Titles to Scrum Roles

Early in your transition to Scrum, you will be faced with a hard truth: your team or organization has job titles and Scrum has roles, and there is probably little to no overlap between the two. How do you map Susan, lead technical architect, and Tom, project manager, to the three Scrum roles: product owner, Scrum master, and developer?

Depending on the resources driving your transition, you’ll find some ready-made solutions at your fingertips: product managers and strategists become product owners, project managers become Scrum masters, and all the other actors become developers. Easy, problem solved, you might think. Susan is now a developer and Tom is a Scrum master. Stick a fork in this transition because it’s done.

I suggest a different approach. Instead of trying to map titles directly to roles, map people to roles. Take a deeper dive into the Scrum roles: What characteristics does a Scrum master need? What authority do they need? Once you’ve figured that out, which person - not title - best matches the needs of the role?

The Product Owner

The Scrum role of product owner (PO) has the following core responsibilities:

  • Maintain the vision of the product
  • Manage trade-offs in scope, schedule, budget, and quality
  • Own the product backlog
  • Be empowered to make decisions
  • Define acceptance criteria and verify that they are met
  • Collaborate with the development team and all stakeholders

Additionally, a good product owner has the following characteristics:

  • Domain knowledge
  • Good communicator
  • Good negotiator
  • Great at building and managing relationships
  • Powerful motivator
  • Willing to make hard and/or unpopular decisions
  • Available to the team

Take a look at the people you have available. Who can best fulfill these responsibilities and has all the necessary characteristics? Pro tip: If someone checks all the boxes except availability, keep looking. A Scrum team with an absent or remote PO is not going to be nearly as effective as a team with a readily and consistently available PO.

The Scrum master

The core responsibilities of a Scrum master (SM) are to:

  • Lead the team by serving them (servant leadership)
  • Coach
  • Shield team from interference
  • Resolve and remove impediments
  • Act as an agent of change

A good SM also has the following characteristics:

  • Knowledgeable about Agile and Scrum
  • Questioning
  • Patient and steady
  • Collaborative
  • Protective of the team
  • Transparent in their communications

There’s also an additional consideration for the Scrum master role, and that’s the lack of command and control. A Scrum master should not be commanding or controlling; they don’t tell team members what to do, and they don’t control what team members work on or how they work. Which person on your team best fits this role? It’s likely that the best candidate is not your project manager. (After all, what PM is happy not being in control?) And don’t forget that your SM can be a developer if they are the best person for the role (and are suited to wearing multiple hats at once). If you don’t have a suitable candidate for the SM role, it would be better to hire a trained and experienced Scrum master rather than placing an unsuitable person into the role.

The Developer

And what about the developer role? The Scrum Guide defines the Development Team as “professionals who do the work of delivering a potentially releasable Increment of ‘Done’ product at the end of each Sprint”. So ask yourself: who is making the product? You’ll probably come up with a collection of folks with varying job titles, like developer, programmer, quality assurance, architect, artist, designer, etc. Congratulations, all those folks are now in the developer role in Scrum!

Stay Focused

Looking for people with suitable characteristics for each Scrum role may take longer than mapping based on job titles, but it’s worth the effort. If you stay focused on people during your transition, you’ll end up with a smoother transition, happier people, and more productive teams. Find out more about transitioning to Scrum by reading about how we did it at Caktus.

Caktus GroupFrom User Story Mapping to High-Level Release Plan

At Caktus, we begin many projects with a discovery workshop. A discovery workshop is an opportunity for our product team to get together with client stakeholders in order to answer three questions:

  • What is the problem we are trying to solve?
  • For whom are we solving this problem?
  • How are we going to solve the problem?

This blog post on product discovery outlines ways to help determine the problem to be solved and answer the question of for whom we are solving the problem.

In short, when discussing the problem to be solved, we talk about:

  • Business goals
  • Project goals
  • Potential constraints and risks
  • Success criteria

To find out for whom we are solving the problem we:

  • Define user roles for the application
  • Discuss user goals
  • Identify user pain points

Finally, to identify how we are going to solve the problem, we map out user flows and tasks in an activity called user story mapping.

User Story Mapping

User story mapping is a visualization technique popularized by Jeff Patton that allows product teams to map out an entire application with respect to the different user roles the application must support.

The activity begins by identifying top-level user actions (or user outcomes), writing them out on sticky notes, and arranging them into a row at the top of the user story map. We refer to that top level row as the narrative flow or the backbone of the user story map.

Top-level user actions mapped out.

If you imagine building a to-do list application, the narrative flow could include user outcomes such as:

  • Manage my account
  • Manage my to-do list
  • Share my to-do list

Once the high-level tasks have been identified and represented in the narrative flow, we move on to identify detail tasks, subtasks, and alternative ways of accomplishing a task. To distinguish detail tasks from the narrative flow in the user story map, we write them out on different-colored sticky notes and add them to the user story map under the relevant high-level tasks.

A user story map indicating subtasks under the main tasks.

In the case of this imaginary to-do list application, under “Manage my account,” we could list detail tasks such as:

  • Create my account
  • Edit my account
  • Delete my account

and subtasks such as:

  • Edit my contact information
  • Edit my password
  • Edit my avatar

After the entire application is mapped out in this way, we identify a list of most valuable features. We do that by asking stakeholders which features the application can go live without and still deliver its essential business and user value. We draw a prioritization line across the map, consider each user story in the map, and move sticky notes that represent non-essential stories (or features) under the priority line.

A user story map with priority line indicating the most valuable features.

The user story mapping activity leads us to a planned-out application and a list of most valuable features. The prioritized user story map also becomes the first iteration of the project backlog. (A backlog is a list of features or technical tasks that are necessary and sufficient to complete a project.)

Writing User Stories

After the discovery workshop, we translate every sticky note from the map into a properly structured user story. In Agile software development, a user story is a brief description of a desired feature that is written from the perspective of an end-user, and that captures user outcomes that the feature is meant to support. A user story follows a prescribed format:

As a [user type], I want [feature] so that [benefit].
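For the to-do list application above, a story might read: “As a registered user, I want to share my to-do list with a friend so that we can divide up the errands.”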

We write user stories as a team on index cards and assign acceptance criteria to each of them. (Acceptance criteria are conditions that a feature must satisfy to be accepted as done or completed.) User stories are then estimated by the development team. A variety of Agile estimation techniques are available. We generally use Planning Poker at Caktus, but at the beginning of a project there are too many backlog items for Planning Poker to be effective. We have found that in those cases, Relative Mass Valuation works well. Using this technique, the team first arranges the user stories in order of their relative size, from small to large level of effort, and then assigns story points to each one using a modified Fibonacci sequence. The result is a fully estimated initial product backlog, which allows the product owner to create a release plan.
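As a hypothetical illustration of Relative Mass Valuation: the team might line up five stories from smallest to largest perceived effort, then walk down the line assigning points from the modified Fibonacci sequence - say 1, 2, 3, 5, and 8 - nudging any story that feels out of place relative to its neighbors before settling on the final estimates.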

Here is what a set of estimated user stories could look like:

Estimating user stories at Caktus

Creating a High-Level Release Plan

The product owner ranks the estimated user stories by priority, taking into account the business value and relative effort of each one, to best take advantage of the development time available. If the team’s velocity is already known, the product owner can divide the major features into rough sprints to create an initial release plan:

An example of a high-level release plan at Caktus.

The product backlog, and by extension the release plan, evolve constantly as the project progresses: priorities change, scope is added or reduced as feedback is gathered, stories are broken down into smaller ones, etc. As long as new backlog items are estimated and prioritized, the product owner can adjust the release plan to maintain a realistic release timeframe.

Conclusion

The process from user story mapping through writing and estimating the user stories gives development teams a foundation on which to base the development effort. User story mapping is a good way to determine what user tasks must be supported and how they break down into subtasks, as well as which user tasks are not essential for the application to deliver on business and user value. Writing user stories as a team is an opportunity to articulate each story in more detail and spread the knowledge among all members of the team. Finally, estimating user stories with the Relative Mass Valuation technique is an efficient way of sizing many stories in one estimation session by comparing them to each other.

We have found the process useful, but we have also learned some lessons:

  • During user story mapping, the stakeholders’ understanding of the project may evolve and by the end of the activity, the user stories identified at the beginning may change accordingly. In these cases, it is important to revisit those stories at the end of the discovery workshop to confirm or adjust them in light of the newly gained understanding.
  • Writing user stories with team members who have not participated in the discovery workshop can be challenging. In the future, we may include a separate workshop debrief session to bring the entire team up to speed on the findings from the discovery workshop before we set out to write user stories.
  • A high-level release plan can be a helpful tool offering an initial timeline for the product release. However, it can become an impediment if its transient nature is not fully understood. In Agile software development, it’s paramount that a high-level release plan such as the one shown here not be treated as a definitive schedule, but rather as an initial take on a possible order of work. As soon as the work begins, that order will change as new information about the project is revealed through the development process.

To learn more about UX techniques used at Caktus, read Product Discovery Part 1: Getting Started or Product Discovery Part 2: From User Contexts to Solutions.

Caktus GroupIs Django the Right Fit for your Project?

You need a website. You may have done some searches, and come back overwhelmed and empty-handed, having found hosting providers offering services that may have sounded familiar (“WordPress”) and ones that may have sounded like a foreign language (“cPanel”). You may have considered hiring someone to build your website, and gotten conflicting answers about what you need and what it would cost. You may have also heard about Django, but you're not sure how it fits into the picture and whether or not it's the right fit for your project. This is common, because there are many different types of websites out there. To help answer the question of whether Django is the right fit for your project, let’s take a look at the landscape.

Figuring out your needs

Most websites fall into one of three categories: Static, Dynamic, or Interactive. Static sites are ones which don’t change much at all; these are typically websites for small, local businesses, listing things such as address, hours, and phone number. Dynamic websites, which are more common, have a static structure but changing content such as a news feed, blog, or pricing which needs to be updated often. A dynamic website may even have a store embedded, where users can make online purchases. At its core, though, the business generates the updates to a dynamic website; visitors simply use what is there. An interactive website, on the other hand, provides many more opportunities for user interaction. Social media websites are interactive, with users creating content (posts) and interacting with others’ content. Dynamic and interactive websites need a content management system (like WordPress or Drupal), or a more custom solution (like Django).

What is a Content Management System?

If you’ve looked into creating a website, you may have heard the term “Content Management System” or “CMS” thrown around. I’ll explain how this fits in by using the analogy of getting a house ready to move in. A static website would be analogous to a furnished apartment, where all the resident needs to do is show up. A CMS, on the other hand, is a fully-built house, but there’s no paint on the walls yet, and there’s no furniture. You’ll need to provide these niceties before you can move in, but you don’t need the expertise of a builder in order to get it ready. Maybe you’ll hire a designer to take care of some of it, or help with some of the decisions, but most people can manage this and do an acceptable job.

That’s pretty much what a CMS is: a website that’s pre-built, but needs that coat of paint, furniture, and some pictures on the walls. A web designer might help you with this, or even do some of that work, but many people can manage this on their own in a pinch. Once set up, a non-technical website owner can add and manage their content there. You may have heard of some common CMS options; WordPress and Drupal are some of the more popular ones. Lots of dynamic websites built today use one type of CMS or another. Even many static websites are now being built using a CMS; the website content may not need to change more than once every year or two, but it’s still nice not to need a developer to change the code directly.

How Django compares to a typical CMS

While WordPress and Drupal are established platforms that can be used to create solid dynamic websites, both are built around being a CMS first. The result is that building in interactive content can be a headache, since these frameworks weren’t really built for users to do much more than browse.

To return to our analogy, if a CMS is a pre-built house that’s missing the paint and furniture, Django is instead the pile of lumber, nails, tools, and other supplies needed to assemble that house. Building a house from those components is certainly not the sort of task that the average homeowner is comfortable taking on, but it has a distinct advantage if the homeowner needs something particularly custom, and that’s exactly where Django shines: in custom website creation.

While Django can be used to create a seamless dynamic website, its flexibility really pays off when building sites that are interactive, or which straddle the boundary between dynamic and interactive. The advantages of Django are numerous, from the vast diversity of Python libraries available (since Django is a Python framework), to the flexibility written into Django itself. If you’re curious to dig into the details of this, we’ve written in much more depth about why we use Django.

Conclusion

If you know that you’ll only ever need a CMS, and the most complex bit of interactivity you’ll need is an online store, then you can probably meet your needs using something like WordPress or Drupal. But if you want the ability to be flexible and add a lot of user interaction like posts, forums, or account management to your website, you’ll probably be better off with a Django solution.

Caktus has been building custom Django websites and apps since 2007. We’ve developed a success model for developing websites the right way and are always happy to chat about your project if you’re still not sure that Django is the right fit for you.

Caktus GroupUpgrading from Wagtail 1.0 to Wagtail 1.11

There are plenty of reasons to upgrade your Wagtail site. Before we look at how to do it, let’s take a look at a few of those reasons.

Why upgrade your Wagtail site?

  • Wagtail is now compatible with Django 1.11 and Python 3.6, so you can use the latest versions (at the time of this blog post) of all three together.
  • Page Revision Management was released in Wagtail 1.4, allowing users to preview and roll back revisions. (Image: page revision management in Wagtail, from http://docs.wagtail.io/)
  • The Wagtail userbar now supports alternative positions such as top-left, where it does not conflict with the Django Debug Toolbar. (Image: the new Wagtail userbar)

  • StreamField was already really nice, but the addition of TableBlock looks useful for easily editing tabular data.
  • Page-level permissions for logged-in users belonging to specific groups are now possible via the new Page Privacy Options.
  • Wagtail now supports many-to-many relations on the Page model.
  • If you’re using PostgreSQL, you can use the built-in PostgreSQL search engine rather than Elasticsearch.
  • Finally, with the June 2017 release of Wagtail 1.11, the Wagtail team updated the Wagtail Explorer with the new admin API and React components. The explorer is now faster to use, includes all of the pages in your site (not just parent pages), and lets you edit a page in fewer steps.

How I ported a Wagtail 1.0 site to Wagtail 1.11

Now that we’ve had a look at the features gained from updating, let’s see how to update.

I decided to port a Wagtail 1.0 project to Wagtail 1.11. I was able to upgrade from 1.0 to 1.11 directly, rather than upgrading version by version (which is a slower process), with a few changes along the way.

To start, I went ahead and created a brand new local virtual environment on my laptop. I pip installed all the current requirements for my Wagtail 1.0 project, and then updated Wagtail.

$(newwagtailenv) pip install -r requirements/dev.txt
$(newwagtailenv) pip install wagtail==1.11

Because we track our requirements’ versions in a file, I updated the ones that changed as a result of the Wagtail upgrade. This included updates to django-taggit and django-modelcluster, among some other new requirements.

I assumed that data migrations would be required for this Wagtail upgrade. When I ran migrate, I encountered an issue right away.

$(newwagtailenv) python manage.py migrate
... in bind_to_model
    related = getattr(model, self.relation_name).related
TypeError: getattr(): attribute name must be string

I found this post to help me solve the issue. I also noticed that, going forward, the Wagtail core team recommends using Stack Overflow to research Wagtail questions.

The error arose because I was using an older style of InlinePanel definition, with the page model as the first parameter. Because that style was deprecated in Wagtail 1.2, I needed to make a few code changes like this one:

Change:

InlinePanel(CaseStudyPage, 'countries', label="Countries"),

To:

InlinePanel('countries', label="Countries"),

The next error I saw when I tried to migrate had to do with tuples and lists.

$(newwagtailenv) python manage.py migrate
    index.SearchField('intro'),
TypeError: can only concatenate list (not "tuple") to list

In the 1.5 release of Wagtail, the search_fields attribute on Page models (and other searchable models) changed from a tuple to a list.

This was another pretty simple fix.

Change:

class MyPage(Page):
    ...
    search_fields = Page.search_fields + (
        index.SearchField('intro'),
    )

To:

class MyPage(Page):
    ...
    search_fields = Page.search_fields + [
        index.SearchField('intro'),
    ]

At this point, I was able to successfully run python manage.py migrate. I gave my test suite a try and it ran successfully, so I tested the site out locally as well. It worked beautifully.

That’s all I had to do! But I decided to do one last thing anyway.

I was excited that Wagtail solved the issue of not having many-to-many fields on the Page model in version 1.9. I read up on the new ParentalManyToManyField and made a plan to use it: having one fewer model means there is less code to maintain long-term. Paying down some technical debt now also means that future developers maintaining this Wagtail site won’t have to spend time researching an older workaround in order to get up to speed, which is generally considered best practice.

When we originally built this Wagtail site, I used the “through model” workaround described in this issue, defining three separate models for each many-to-many relationship. For instance, I had CaseStudyPage (based on the Page model), the Country model, and a through model called CountryCaseStudy that created the many-to-many relationship between CaseStudyPage and Country.

Here’s how to move from the “through model” method to the new ParentalManyToManyField, including how to port the data:

  • Create a new field called countries_new on the page model (CaseStudyPage in this case) to replace the through-model implementation (CountryCaseStudy):
countries_new = ParentalManyToManyField('portal_pages.Country', blank=True)
  • Make a new migration file for this field while the old models are still in place, so the data can be preserved before ripping them out.
$ python manage.py makemigrations
  • Create a new data migration file to copy data from the through model to countries_new.
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations


# Loop through all Case Study pages and save the countries
# to the new ParentalManyToManyField

def no_op(apps, schema_editor):
    # Do nothing when the migration is reversed
    pass

def save_countries_to_new_parental_m2m(apps, schema_editor):
    # Need to import the actual model here
    # versus the "fake" model
    # so that the Clusterable model logic works and we can
    # successfully save the ParentalManyToManyField
    from portal_pages.models import CaseStudyPage

    for csp in CaseStudyPage.objects.all():
        # csp.countries holds the CountryCaseStudy through-model rows;
        # each row's .country attribute is the related Country instance
        csp.countries_new = [link.country for link in csp.countries.all()]
        csp.save()


class Migration(migrations.Migration):

    dependencies = [
        ('portal_pages', '0055_casestudypage_countries_new'),
        ('wagtailcore', '0038_make_first_published_at_editable'),
    ]

    operations = [
        migrations.RunPython(save_countries_to_new_parental_m2m, no_op),
    ]
  • Update existing code that used the through model to use the new field instead
  • Delete the through model now that it’s no longer needed
  • Run makemigrations and migrate again
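For reference, here is roughly what the relevant part of the page model looks like after these steps, under Wagtail 1.11’s module layout (the CheckboxSelectMultiple widget is my assumption; use whatever widget suits your admin):

from django import forms
from modelcluster.fields import ParentalManyToManyField
from wagtail.wagtailadmin.edit_handlers import FieldPanel
from wagtail.wagtailcore.models import Page

class CaseStudyPage(Page):
    countries_new = ParentalManyToManyField('portal_pages.Country', blank=True)

    content_panels = Page.content_panels + [
        # Render the many-to-many field as checkboxes in the Wagtail admin
        FieldPanel('countries_new', widget=forms.CheckboxSelectMultiple),
    ]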

The trickiest part for me was moving data to the newly implemented ParentalManyToManyField. Because the Page model is a Clusterable model, I needed to import the current model class rather than use the historical model state. I spent a little time figuring that out and have to thank Matthew Westcott, who guided me in the right direction from the Wagtail Slack channel.

You can see the updates I made on GitHub on the RapidPro Community Portal wagtail-updates branch. There is still more work to be done and we hope to complete it soon.

Conclusion

The Wagtail CMS has really come into its own as a beautiful and easy-to-use content management system. I highly recommend keeping your Wagtail site up to date to take advantage of all the newest features. To read more about Caktus and Wagtail, check out this post about adding pages outside of the CMS or this one about our participation in Wagtail Sprints.

Caktus GroupCaktus at DjangoCon2017

In less than a month we’ll be heading out to Spokane, WA for DjangoCon 2017. We’re proud to be attending as sponsors for the eighth year, and look forward to greeting everyone at our booth. On August 16th, we’ll be raffling off a GoPro Session action camera, so be sure to stop by and enter. We’ll also have our comfy new t-shirts and some limited-edition Caktus 10th Anniversary water bottles to give away. They went fast at PyCon, so don’t wait to get yours.

Swag and giveaways for the Caktus DjangoCon booth

As part of our commitment to sharing quality Django content with the community, we’ll also be offering a survey at the booth to find out what you, our audience, are interested in seeing more of. We hope you’ll help us out! If you can’t make it to DjangoCon but still want to participate, you can take the survey on Ona.

Speakers

One of our very own developers will be speaking at DjangoCon this year! We’re excited that Charlotte Mays was selected to give a talk about writing APIs for almost anything, in which she’ll cover the power and flexibility of Django Rest Framework.

Caktus developer Charlotte Mays delivering a talk

Congratulations to Charlotte! We hope you’ll all go have a listen on Monday, August 14th at 5:30pm.

Talks

In addition to Charlotte’s talk, Caktus developers have quite the list of talks they’re excited to see.

See you there!

Working on a Django web or SMS project and looking for help? We’d love to see if we can help with team augmentation, a discovery workshop, or start-to-finish custom development. Contact us to set up a dedicated time to talk.

Caktus GroupConstructive Code Review (Bonus PyCon 2017 Must-See Talk)

There were so many good talks this year that we're including a bonus entry in the 2017 edition of our annual PyCon Must-See Series, highlighting the talks our staff especially loved at PyCon.

Erik Rose’s talk “Constructive Code Review” is on the surface a talk about how to do just that: review code in a way that builds people up rather than tearing them down. However, in 40 minutes he manages to cover a breadth of topics relevant to anyone who works with other people, including (but not limited to): simple rules to assist you in maintaining constructive communications, tips on how to ensure you receive the feedback you want, methods to manage your emotional state, stress management, a three-step approach to training new people, and ideas on how to build trust. I found this talk so helpful that I’ve watched it twice and taken detailed notes, and recommended it to my teams to watch as well. Highly recommended, whether you code for a living or not!

Caktus GroupReadability Counts (PyCon 2017 Must-See Talk 6/6)

Part 6 in the 2017 edition of our annual PyCon Must-See Series, highlighting the talks our staff especially loved at PyCon. While there were many great talks, this is our team's shortlist.

"Readability Counts" was a good talk about why your code should be readable and how you get it there. One of the things I appreciated was that while it was very developer-focused, it was human-oriented rather than technical.

In his presentation, Trey Hunner shared four reasons why code should be readable:

  • It makes your life easier
  • Code is more often read than written
  • It is easier to maintain readable code
  • It’s easier to onboard new team members

He also shared a few best practices to achieve this, including usage of white space, line breaks, and code structure; descriptive naming; and choosing the right construct and coding idioms.

Caktus GroupPython Tool Review: Using PyCharm for Python Development - and More

Back in 2011, I wrote a blog post on using Eclipse for Python Development.

I've never updated that post, and it's probably terribly outdated by now. But there's a good reason for that - I haven't used Eclipse in years. Not long after that post, I came across PyCharm, and I haven't really looked back.

Performance

Eclipse always felt sluggish to me. PyCharm feels an order of magnitude more responsive. Sometimes it takes a minute to update its indices after I've checked out a new branch of a very large project, but usually even that is barely noticeable. Once the indices are updated, everything is very fast.

Responding quickly is very important. If I'm deep in a problem and suddenly have to stop and wait for my editor to finish something, it can break my concentration and end up slowing me down far more than the few extra seconds the operation itself took.

It's not just editing that's fast. I can search for things across every file in my current project faster than I can type in the search string. It's amazing how useful that simple ability becomes.

Python

PyCharm knows Python. My favorite command is Control-B, which jumps to the definition of whatever is under the cursor. That's not so hard when the variable was just assigned a constant a few lines before. But most of the time, knowing the type of a variable at a particular time requires understanding the code that got you there. And PyCharm gets this right an astonishing amount of the time.

I can have multiple projects open in PyCharm at one time, each using its own virtual environment, and everything just works. This is another absolute requirement for my workflow.

The latest release even understands Python type annotations from the very latest Python, Python 3.6.

Django

PyCharm has built-in support for Django. This includes things like knowing the syntax of Django templates, and being able to run and debug your Django app right in PyCharm.

Git

PyCharm recognizes that your project is stored in a git repo and has lots of useful features related to that, like adding new files to the repo for you and making clear which files are not actually in the repo, showing all changes since the last commit, comparing a file to any other version of itself, pulling, committing, pushing, checking out another branch, creating a branch, etc.

I use some of these features in PyCharm, and go back to the command line for some other operations just because I'm so used to doing things that way. PyCharm is fine with that; when I go back to PyCharm, it just notices that things have changed and carries on.

Because the git support is so handy, I sometimes use PyCharm to edit files in projects that have no Python code at all, like my personal dotfiles and ansible scripts.

Code checking

PyCharm provides numerous options for checking your code for syntax and style issues as you write it, for Python, HTML, JavaScript, and probably whatever else you need to work on. But every check can be disabled if you want to, so your work is not cluttered with warnings you are ignoring, just the ones you want to see.

Cross-platform

When I started using PyCharm, I was switching between Linux at work and a Mac at home. PyCharm works the same on both, so I didn't have to keep switching tools.

(If you're wondering, I'm always using Linux now, except for a few hours a year when I do my taxes.)

Documentation

Admittedly, the documentation is sparse compared to, say, Django's. There seems to be a lot of it on the support website, but once you start to use it, you realize that most pages have only a paragraph or two that barely scratch the surface. It's especially frustrating to look for details of how something works in PyCharm and find a page about it, only to learn nothing more than which key invokes it.

Luckily, most of the time you can manage without detailed documentation. But I often wonder how many features could be more useful to me, if only what they do were documented.

Commercial product

PyCharm has a free and a paid version, and I use the paid version, which adds support for web development and Django, among other things. I suspect I'm like a lot of my peers in usually looking for free tools and passing over the paid ones. I might not ever have tried PyCharm if I hadn't received a free or reduced-cost trial at a conference.

But I'm here to say, PyCharm is worth it if you write a lot of Python. And I'm glad they have revenue to pay programmers to keep PyCharm working, and to update it as Python evolves.

Conclusion

I'm not saying PyCharm is better than everything else. I haven't tried everything else, and don't plan to. Trying a new development environment seriously is a significant investment in time.

What I can say is that I'm very happy and productive using PyCharm both at work and at home, and if you're dissatisfied with whatever you're using now, it might be worth checking it out.

(Editor’s Note: Neither the author nor Caktus have any connection with JetBrains, the makers of PyCharm, other than as customers. No payment or other compensation was received for this review. This post reflects the personal opinion and experience of the author and should not be considered an endorsement by Caktus Group.)

Caktus GroupRequests, Under the Hood (PyCon 2017 Must-See Talk 5/6)

Part five of six in the 2017 edition of our annual PyCon Must-See Series, highlighting the talks our staff especially loved at PyCon. While there were many great talks, this is our team's shortlist.

My must-see talk this year was "Requests Under the Hood", in which Cory Benfield reveals some of the dark corners in the Requests library for Python. As one of the library’s core maintainers, Cory is in a unique position to share insights about how beautifully-written code intended for a specific problem becomes dirty over time as it is adapted to edge cases, workarounds, or hacks once deployed.

Cory respectfully sheds light on some of Requests’ most troubling code in an effort to provide teachable moments. He’s a natural speaker, so it made for an engaging presentation. It’s a great reminder for all developers to not rush to judgment when working with legacy code.

Og MacielJust What Is A Quality Engineer? Part 2

The last time I wrote about Quality Engineering, I mentioned that some of the reasons why people are not familiar with this term are, in no particular order:

  • 'Quality' is usually added as an afterthought and doesn't really come into the picture, if ever, until the very end of the release process
  • Nobody outside of a QA team really knows what they do. It has something to do with testing...
  • Engineering is usually identified with skills related to writing code and designing algorithms, usually by a developer and not by QA

A quick search on Google shows the following results:

  • 104,000,000 hits for "Software Engineer"
  • 86,900,000 hits for “Quality Control”
  • 83,100,000 hits for “Quality Assurance”
  • 5,390,000 hits for “Quality Engineer”

As you can see, it is no wonder that whenever I say 'quality engineer' people always think that what I really meant to say was 'quality assurance' or 'quality control'. The term is just not that well-known! So in order to clarify what the difference is between these professions, today I'd like to talk a little bit about quality assurance and what I usually think whenever someone tells me that they either work in QA or have a 'QA team'.

Wikipedia tells us that the terms 'quality assurance' (QA) and 'quality control' (QC) are often used interchangeably to refer to ways of ensuring the quality of a service or product.

Furthermore,

"Quality assurance comprises administrative and procedural activities implemented in a quality system so that requirements and goals for a product, service or activity will be fulfilled. It is the systematic measurement, comparison with a standard, monitoring of processes and an associated feedback loop that confers error prevention." -- Wikipedia

That is quite a mouthful (the emphasized words are mine), but I feel that it does a good job of stating the following ideas:

  • Quality Assurance and/or Quality Control is used to assure the quality of a product, but there is no clear distinction as to when in the release process it should be used. In my experience, it usually happens when the product is close to being shipped!
  • Used to make sure that requirements (the what) are fulfilled (the how)
  • Used to measure, monitor and compare results against a standard
  • Used for error preventions (which to me denotes a reactive mode compared to a proactive mode)

In other words, those who do quality assurance for a living are involved in verifying that the final version of the product being tested delivers exactly what was designed with the expected behavior and outcome. It requires that the QA person fully understand what is being added to or changed in the product and, most importantly, what the end result should be. Testing is definitely a big part of the 'day to day' activities for someone in QA, which does provide useful information to create a positive feedback loop and hopefully increase error prevention.

Here's what I don't like about this whole business though:

Quality is something that must be part of all phases of a product and not at the very end of the process. A good QA person is usually so familiar with the product being tested that one could say that QA is the first customer a company has! If you have someone in your team who can fully understand how your product works, where the pain points are, knows at a glance if a new feature or a fix does not follow the existing standards, and has the ability to tell you if something doesn't feel right, would you want to hear this type of feedback at the very end? By then, can you really afford to put things on hold and re-design your product??? In my experience, the answer to this question has 99.99% of the time been 'No'.

Quality is the responsibility of everyone involved with a product and not only of those in QA! Everyone, document writers, translators, user experience (UX) experts, product managers, you name it, everyone should be in the business of delivering and assuring the quality of the product! If you bought something, would you be OK with accepting mediocre user experience, documentation, features and translations? I doubt it.

Monitoring and measuring how a product compares against some set of standardized benchmarks is definitely important, but as customers request more and more new features and the product's complexity increases, are your benchmarks keeping up with all these changes? More importantly, since you are the one using the product day and night, do you have any input into updating those benchmarks? I certainly hope so.

Lastly, if your job is to make sure that no product 'goes out the door' without a thorough validation, that it works as expected, and that all known issues have been fixed, aren't you forgetting something? What about the issues that are not known yet? You may be thinking that I'm joking, but seriously: if all you do is prevent errors from being shipped to your customers, how about detecting them as early as possible to give all major stakeholders enough time to decide what should be done with them? Again, if you're catching them at the end of the release cycle, it could be too late.

If your company has a QA team, then you're already ahead of the game; too often it is only when customer dissatisfaction is very high and the final numbers for the quarter start to look gloomy that people start paying attention to delivering quality. But it is not enough if you're only kicking the can down the road, to find yourself facing the same scenario later on! Quality, good quality, is what everyone on your team should be striving for... not some of the time, but all the time!

If you are in a QA team, do you ever feel like you're ahead of the game, or do you feel like you're constantly playing catch-up? Do you wish you had a chance to catch issues as early as possible? Wouldn't you want to stop racing against the clock to get issues verified and have a shot at doing more exploratory testing and identifying problems early on? Would you say 'no' to an opportunity to provide some insight into how the product could be improved, and perhaps how some workflows could be simplified to increase usability?

It should be clear by now that quality should be systemic for any project or company that takes customer satisfaction as its top priority! Sure, you can test the product as much as you (or your QA team) can handle, but you'd only be treating the symptoms. Maintaining a 'quality first' mentality and improving existing processes to make sure that quality is an integral part of everyone's day-to-day activities is essential if you really want to make a bigger impact!

This is where a Quality Engineer comes in! A Quality Engineer is someone who can actively and continuously drive improvements to the release cycle process, and who is in a unique position to help the entire team adopt these improvements so that everyone is using the same methodologies.

Next time I will talk about quality engineering (QE): what it is, what it isn't, and why you should either be hiring more QEs or, if you're in QA, be working to become a QE!

As always, please let me know what your thoughts are on this topic, as I'd love to get some constructive feedback!

Disclaimer: The opinions contained within this article are mine alone and do not necessarily represent the opinions of any entity whatsoever with which I have been, am now or will be affiliated.

Frank WierzbickiJython 2.7.1 final released!

On behalf of the Jython development team, I'm pleased to announce that the final release of Jython 2.7.1 is available! We thought 2017-07-01 was a perfect time to release version 2.7.1 :) This is a bugfix release. Bug fixes include improvements in ssl and pip support along with lots of improvements in CPython compatibility.

Please see the NEWS file for detailed release notes. This release of Jython requires JDK 7 or above.

This release is being hosted at maven central. There are three main distributions. In order of popularity:
To see all of the files available including checksums, go to the maven query for org.python+Jython and navigate to the appropriate distribution and version.

Og MacielJust What Is A Quality Engineer? Part 1

Picture of 'Batman'

Whenever I meet someone for the first time, after we get past the initial niceties, eventually the conversation shifts to work and what one does for a living. Inevitably I'm faced with what, at first glance, may sound like a simple question, and the conversation goes like this:

  • New acquaintance: "What do you do at Red Hat?"
  • Me: "I manage a team of quality engineers for a couple of different products."
  • New acquaintance: "Oh, you mean quality assurance, right? QA?"
  • Me: "No, quality engineers. QE."

What usually followed was a lengthy monologue whereby I spent around ten to fifteen minutes explaining what the difference between QA and QE is and what, in my opinion, sets these two professions apart. Now, before I get too deep into this topic, I have to add a disclaimer here so as not to give folks the impression that what I'm talking about is backed by any official definition or some type of professional trade organization! The following are my own definitions and conclusions, none of which were pulled out of thin air, but backed by (so far) 10 years of experience working in the field of delivering quality products. If there are formal definitions out there, and they match my own, it is by pure coincidence.

Why the term 'Quality Engineer' is not well known I'm not sure, but I have a hunch that it may be related to something I noticed throughout the 10 years I have spent in this field. In my personal experience, 'quality' is something that is not always considered part of the creation of a new company, product, or project. Furthermore, the term 'quality' is also not well defined or understood by those involved in actually attempting to 'get more' of it.

In my experience, folks usually forget about the word 'quality', whatever that may be, happily start planning and developing their new ideas/products, and eventually ship them to their customers. If the customer complains that something is not working or performing as advertised, or that it doesn't meet their expectations, no problem: someone will convey the feedback back to the developers, a fix will eventually be provided, and off it goes to the customer. Have you ever seen this before? I have!

Eventually, assuming that the business is doing well and is attracting more paying customers, it is highly likely that support requests or requests for new features will increase. After all, who wants to pay for something that doesn't work as expected? And who doesn't want a new feature of their own? Depending on the size of the company and the number of new requests going into their backlog, I'd expect that one of the following events would then take place:

  • More tasks from the backlog would be added to individuals' 'plates', or
  • New associates would be hired to handle the volume of tasks

I guess one could also stop accepting new requests for support or new features, but that would not make your customers happy, would it?

Regardless of the outcome, the influx of new tasks is dealt with, and if things get out of control again, one could always try to get an intern or distribute tasks more evenly. Now, notice how the word 'quality' has not been mentioned yet? It is no accident that to solve an increase in work, more often than not the number one solution is to throw more resources at it. There's even a name for this type of 'solution': The Mythical Man-Month.

You see, sadly, 'quality' is something that usually becomes important only as an afterthought. It is the last piece added to the puzzle that comprises the machinery of delivering something to an end user. It is only when enough angry and unsatisfied paying customers make enough noise about the unreliability or usability of the product that folks start asking: "Was this even tested before being put on the market?"

If the pain being inflicted by customer feedback is sharp enough, a Quality Assurance (QA) team is hastily put together. Most of the time, in my experience, this is a Team of One, usually made up of one of the developers who, after being dragged kicking and screaming from his cubicle, is eventually beaten into accepting his new role as a button pusher, text field filler, testing guy. Issues are then assigned to him and a general sense of relief is experienced by all. Have you also seen this before? I have! I'm 2 for 2 so far!

The idea is that by creating a team of one to sit at the receiving end of the product release cycle, nothing will get shipped until some level of 'quality' is achieved. The fallacy here, however, is that no matter how agile your team may be, the assurance of the product's quality is still stuck in a waterfall model. Wouldn't it be better if problems were caught as early as possible in the process instead of waiting until the very end? To me that is a no-brainer, but somehow the process of testing a product is still relegated to the very end, usually when the date for the release is just around the corner.

Why is the term Quality Engineer not well known then? I feel that the answer has several parts:

  • 'Quality' doesn't come into the picture, if ever, until the very end of the game;
  • If there is a QA team, nobody outside of that team really knows what they do. It has something to do with testing...
  • Engineering is usually identified with skills related to writing code and designing algorithms, skills associated with developers and not with QA;

No surprise that quality engineering is something foreign to most!

OK, so what is a Quality Engineer then? Glad you asked! The answer to that I shall provide in a subsequent post, as I still need to cover some more ground and talk about what 'quality' is, what someone in QA does, and finally what a QE is!

My next article will continue this journey through the land of Quality and Engineering, and in the meantime, please let me know what you think about this subject.

Caktus GroupManaging your AWS Container Infrastructure with Python

We deploy Python/Django apps to a wide variety of hosting providers at Caktus. Our django-project-template includes a Salt configuration to set up an Ubuntu virtual machine on just about any hosting provider, from scratch. We've also modified this a number of times for local hosting requirements when our customer required the application we built to be hosted on hardware they control. In the past, we also built our own tool for creating and managing EC2 instances automatically via the Amazon Web Services (AWS) APIs. In March, my colleague Dan Poirier wrote an excellent post about deploying Django applications to Elastic Beanstalk demonstrating how we’ve used that service.

AWS has added many managed services that help ease the process of hosting web applications on AWS. The most important addition to the AWS stack (for us) was undoubtedly Amazon RDS for Postgres, launched in November 2013. As long-time advocates for Postgres, we saw this addition to the AWS suite as the final puzzle piece necessary for building an AWS infrastructure for a typical Django app that requires little to no manual management. Still, the suite of AWS tools and services is immense, and configuring these manually is time-consuming and error-prone; despite everything it offers, setting up "one-click" deploys to AWS (à la Heroku) is still a complex challenge.

In this post, I'll be discussing another approach to hosting Python/Django apps and managing server infrastructure on AWS. In particular, we'll be looking at a Python library called troposphere that allows you to describe AWS resources using Python and generate CloudFormation templates to upload to AWS. We'll also look at a sample collection of troposphere scripts I compiled as part of the preparation for this post, which I've named (at least for now) AWS Container Basics.

Introduction to CloudFormation and Troposphere

CloudFormation is Amazon's answer to automated resource provisioning. A CloudFormation template is simply a JSON file that describes AWS resources and the relationships between them. It allows you to define Parameters (inputs) to the template and even includes a small set of intrinsic functions for more complex use cases. Relationships between resources are defined using the Ref function.
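For illustration, here's a minimal hand-written template sketch (the parameter and resource names are mine, not from any official example) showing a Parameter and a Ref tying it to a resource:

{
  "Parameters": {
    "DomainName": {"Type": "String"}
  },
  "Resources": {
    "AssetsBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": {"Ref": "DomainName"}
      }
    }
  }
}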

Troposphere allows you to accomplish all of the same things, but with the added benefit of writing Python code rather than JSON. To give you an idea of how Troposphere works, here's a quick example that creates an S3 bucket for hosting (public) static assets for your application (e.g., in the event you wanted to host your Django static media on S3):

from troposphere import Join, Template
from troposphere.s3 import (
    Bucket,
    CorsConfiguration,
    CorsRules,
    PublicRead,
    VersioningConfiguration,
)

template = Template()
domain_name = "myapp.com"

# A publicly-readable, versioned assets bucket that accepts cross-origin
# requests from https://myapp.com only. DeletionPolicy="Retain" tells
# CloudFormation to keep the bucket (and its contents) if the stack itself
# is ever deleted.
template.add_resource(
    Bucket(
        "AssetsBucket",
        AccessControl=PublicRead,
        VersioningConfiguration=VersioningConfiguration(Status="Enabled"),
        DeletionPolicy="Retain",
        CorsConfiguration=CorsConfiguration(
            CorsRules=[CorsRules(
                AllowedOrigins=[Join("", ["https://", domain_name])],
                AllowedMethods=["POST", "PUT", "HEAD", "GET"],
                AllowedHeaders=["*"],
            )]
        ),
    )
)

# Serialize the template to CloudFormation-ready JSON.
print(template.to_json())

This generates a JSON dump that looks very similar to the corresponding Python code, which can be uploaded to CloudFormation to create and manage this S3 bucket. Why not just write this directly in JSON, one might ask? The advantages of using Troposphere are that:

  1. it gives you all the power of Python to describe or create resources conditionally (e.g., to easily provide multiple versions of the same template),
  2. it provides compile-time detection of naming or syntax errors, e.g., via flake8 or Python itself, and
  3. it also validates (most of) the structure of a template, e.g., ensuring that the correct object types are provided when creating a resource.

Troposphere does not detect all possible errors you might encounter when building a template for CloudFormation, but it does significantly improve one's ability to detect and fix errors quickly, without the need to upload the template to CloudFormation for a live test.
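To illustrate the first of those advantages, here's a hedged sketch (the function name and the public/private flag are mine, for illustration only) of how an ordinary Python conditional can produce two variants of the same template:

from troposphere import Template
from troposphere.s3 import Bucket, Private, PublicRead

def make_template(public_assets=False):
    # Build the same stack with a public or a private assets bucket,
    # depending on a plain Python flag.
    template = Template()
    template.add_resource(
        Bucket(
            "AssetsBucket",
            AccessControl=PublicRead if public_assets else Private,
        )
    )
    return template

# One code path, two template variants:
print(make_template(public_assets=True).to_json())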

Supported resources

Creating an S3 bucket is a simple example, and you don't really need Troposphere to do that. How does this scale to larger, more complex infrastructure requirements?

As of the time of this post, Troposphere includes support for 39 different resource types (such as EC2, ECS, RDS, and Elastic Beanstalk). Perhaps most importantly, within its EC2 package, Troposphere includes support for creating VPCs, subnets, routes, and related network infrastructure. This means you can easily create a template for a VPC that is split across availability zones, and then programmatically define resources inside those subnets/VPCs. A stack for hosting an entire, self-contained application can be templated and easily duplicated for different application environments such as staging and production.
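To give a flavor of what that looks like, here's a minimal sketch (the CIDR blocks and availability zones are placeholder values of my choosing) that creates a VPC with one subnet per availability zone:

from troposphere import Ref, Template
from troposphere.ec2 import VPC, Subnet

template = Template()
vpc = template.add_resource(VPC("Vpc", CidrBlock="10.0.0.0/16"))

# Carve one subnet per availability zone out of the VPC's address space.
for index, zone in enumerate(["us-east-1a", "us-east-1b"]):
    template.add_resource(Subnet(
        "Subnet%d" % index,
        VpcId=Ref(vpc),
        CidrBlock="10.0.%d.0/24" % index,
        AvailabilityZone=zone,
    ))

print(template.to_json())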

AWS managed services for a typical web app

AWS includes a wide array of managed services. Beyond EC2, what are some of the services one might need to host a Dockerized web application on AWS? Although each application is unique and will have differing managed service needs, some of the services one is likely to encounter when hosting a Python/Django (or any other) web application on AWS are:

  • S3 for storing and serving static and/or uploaded media
  • RDS for a Postgres (or MySQL) database
  • ElastiCache, which supports both Memcached and Redis, for a cache, session store, and/or message broker
  • CloudFront, which provides edge servers for faster serving of static resources
  • Certificate Manager, which provides a free SSL certificate for your AWS-provided load balancer and supports automatic renewal
  • Virtual Private Clouds (VPCs) for overall network management
  • Elastic Load Balancers (ELBs), which allow you to transparently spread traffic across Availability Zones (AZs). These are managed by AWS and the underlying IPs may change over time.

Provisioning your application servers

For hosting a Python/Django application on AWS, you have essentially four options:

  • Configure your application as a set of task definitions and/or services using the AWS Elastic Container Service (ECS). This is a complex service, and I don't recommend it as a starting point.
  • Create an Elastic Beanstalk Multicontainer Docker environment (which actually creates and manages an ECS Cluster for you behind the scenes). This provides much of the flexibility of ECS, but decouples the deployment and container definitions from the infrastructure. This makes it easier to set up your infrastructure once and be confident that you can continue to use it as your requirements for running additional tasks (e.g., background tasks via Celery) change over the lifetime of a project.
  • Configure an array of EC2 instances yourself, either by creating an AMI of your application or manually configuring EC2 instances with Salt, Ansible, Chef, Puppet, or another such tool. This is an option that facilitates migration for legacy applications that might already have all the tools in place to provision application servers, and it's typically fairly simple to modify these setups to point your application configuration to external database and cache servers. This is the only option available for projects using AWS GovCloud, which at the time of this post supports neither ECS nor EB.
  • Create an Elastic Beanstalk Python environment. This option is similar to configuring an array of EC2 instances yourself, but AWS manages provisioning the servers for you, based on the instructions you provide. This is the approach described in Dan's blog post on Amazon Elastic Beanstalk.

Putting it all together

This was originally a hobby / weekend learning project for me. I'm much indebted to the blog post by Jean-Philippe Serafin (no relation to Caktus) titled How to build a scalable AWS web app stack using ECS and CloudFormation, which I recommend reading to see how one can construct a comprehensive set of managed AWS resources in a single CloudFormation stack. Rather than repeat all of that here, however, I'm going to focus on some of the outcomes and potential uses for this project.

Jean-Philippe Serafin provided all the code for his blog post on GitHub. Starting from that, I've updated and released another project -- a workable solution for hosting fully-featured Python/Django apps, relying entirely on AWS managed services -- on GitHub under the name AWS Container Basics. It includes several configuration variants (thanks to Troposphere) that support stacks with and without NAT gateways as well as three of the application server hosting options outlined above (ECS, EB Multicontainer Docker, or EC2). Contributions are also welcome!

Setting up a demo

To learn more about how AWS works, I recommend creating a stack of your own to play with. You can do so for free if you have an account that's still within the 12-month free tier. If you don't have an account, or it's past its free tier window, you can create a new account at aws.amazon.com (AWS does not frown on individuals or companies having multiple accounts; in fact, it's encouraged as an approach for keeping different applications or even environments properly isolated). Once you have an account ready:

  • Make sure you have your preferred region selected in the console via the menu in the top right corner. Sometimes AWS selects an unintuitive default, even after you have resources created in another region.

  • If you haven't already, you'll need to upload your SSH public key to EC2 (or create a new key pair). You can do so from the Key Pairs section of the EC2 Console.

  • Next, click the button below to launch a new stack:

    [Launch Stack button: https://s3.amazonaws.com/cloudformation-examples/cloudformation-launch-stack.png]
  • On the Select Template page:

  • On the Specify Details page:

    • Enter a Stack Name of your choosing. Names that can be distinguished via the first 5 characters are better, because the name will be trimmed when generating names for the underlying AWS resources.
    • Change the instance types if you wish, however, note that the t2.micro instance type is available within the AWS free tier for EC2, RDS, and ElastiCache.
    • Enter a DatabaseEngineVersion. I recommend using the latest version of Postgres supported by RDS. As of the time of this post, that is 9.6.2.
    • Generate and add a random DatabasePassword for RDS (see the snippet after these steps for one way to generate random values). While the stack is configured to pass this to your application automatically (via DATABASE_URL), RDS and CloudFormation do not support generating their own passwords at this time.
    • Enter a DomainName. This should be the fully-qualified domain name, e.g., myapp.mydomain.com. Your email address (or one you have access to) should be listed in the Whois database for the domain. The domain name will be used for several things, including generation of a free SSL certificate via the AWS Certificate Manager. When you create the stack, you will receive an email asking you to approve the certificate (which you must do before the stack will finish creating). The DNS for this domain doesn't need to exist yet (you'll update this later).
    • For the KeyName, select the key you created or uploaded in the prior step.
    • For the SecretKey, generate a random SECRET_KEY, which will be added to the environment (for use by Django, if needed). If your application doesn't need a SECRET_KEY, enter a dummy value here. This can be changed later, if needed.
    • Once you're happy with the values, click Next.
  • On the Options page, click Next (no additional tags, permissions, or notifications are necessary, so these can all be left blank).

  • On the Review page, double check that everything is correct, check the "I acknowledge that AWS CloudFormation might create IAM resources." box, and click Create.
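Incidentally, for the two random values mentioned above (DatabasePassword and SECRET_KEY), here's a minimal way to generate them, assuming Python 3.6+ for the standard-library secrets module (the lengths are arbitrary):

import secrets

# URL-safe random strings work for both parameters.
print("DatabasePassword:", secrets.token_urlsafe(30))
print("SECRET_KEY:", secrets.token_urlsafe(50))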

The stack will take about 30 minutes to create, and you can monitor its progress by selecting the stack on the CloudFormation Stacks page and monitoring the Resources and/or Events tabs.

Using the demo

When it is finished, you'll have an Elastic Beanstalk Multicontainer Docker environment running inside a dedicated VPC, along with an S3 bucket for static assets (including an associated CloudFront distribution), a private S3 bucket for uploaded media, a Postgres database, and a Redis instance for caching, session storage, and/or use as a task broker. The environment variables provided to your container are as follows (a sketch of reading them in Django settings follows the list):

  • AWS_STORAGE_BUCKET_NAME: The name of the S3 bucket in which your application should store static assets.
  • AWS_PRIVATE_STORAGE_BUCKET_NAME: The name of the S3 bucket in which your application should store private/uploaded files or media (make sure you configure your storage backend to require authentication to read objects and encrypt them at rest, if needed).
  • CDN_DOMAIN_NAME: The domain name of the CloudFront distribution connected to the above S3 bucket; you should use this (or the S3 bucket URL directly) to refer to static assets in your HTML.
  • DOMAIN_NAME: The domain name you specified when creating the stack, which will be associated with the automatically-generated SSL certificate.
  • SECRET_KEY: The secret key you specified when creating this stack
  • DATABASE_URL: The URL to the RDS instance created as part of this stack.
  • REDIS_URL: The URL to the Redis instance created as part of this stack (which may be used as a cache or session store, for example). Note that Redis supports multiple databases and no database ID is included as part of the URL, so you should append a forward slash and the integer index of the database, e.g., /0.
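Here's the settings sketch promised above: one rough way a Django settings module might consume these variables. It assumes the dj-database-url and django-redis packages, which are not part of the stack itself:

import os

import dj_database_url

# Parse DATABASE_URL into Django's DATABASES setting.
DATABASES = {"default": dj_database_url.config()}

SECRET_KEY = os.environ["SECRET_KEY"]

AWS_STORAGE_BUCKET_NAME = os.environ["AWS_STORAGE_BUCKET_NAME"]
AWS_PRIVATE_STORAGE_BUCKET_NAME = os.environ["AWS_PRIVATE_STORAGE_BUCKET_NAME"]

# Serve static assets through the CloudFront distribution.
STATIC_URL = "https://%s/static/" % os.environ["CDN_DOMAIN_NAME"]

# Append the Redis database index to REDIS_URL, as noted above.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": os.environ["REDIS_URL"] + "/0",
    }
}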

Optional: Uploading your Docker image to the EC2 Container Registry

One of the AWS resources created by AWS Container Basics is an EC2 Container Registry (ECR) repository. If you're using Docker and don't have a place to store images already (or would prefer to consolidate hosting at AWS to simplify authentication), you can push your Docker image to ECR. You can build and push your Docker image as follows:

DOCKER_TAG=$(git rev-parse HEAD)  # or "latest", if you prefer
# Log in to ECR (the aws command prints a docker login command, which $() runs):
$(aws ecr get-login --region <region>)
# Build the image, tag it with the ECR repository URI, and push it:
docker build -t <stack-name> .
docker tag <stack-name>:$DOCKER_TAG <account-id>.dkr.ecr.<region>.amazonaws.com/<stack-name>:$DOCKER_TAG
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/<stack-name>:$DOCKER_TAG

You will need to replace <stack-name> with the name of the stack you entered above, <account-id> with your AWS Account ID, and <region> with your AWS region. You can also see these commands with the appropriate variables filled in by clicking the "View Push Commands" button on the Amazon ECS Repository detail page in the AWS console (note that AWS defaults to using a DOCKER_TAG of latest instead of using the Git commit SHA).

Updating existing stacks

CloudFormation, and by extension Troposphere, also support the concept of "updating" existing stacks. This means you can take an existing CloudFormation template such as AWS Container Basics, fork and tweak it to your needs, and upload the new template to CloudFormation. CloudFormation will calculate the minimum changes necessary to implement the change, inform you of what those are, and give you the option to proceed or decline. Some changes can be done as modifications whereas other, more significant changes (such as enabling encryption on an RDS instance or changing the solution stack for an Elastic Beanstalk environment) require destroying and recreating the underlying resource. CloudFormation will inform you if it needs to do this, so inspect the proposed change list carefully.
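For instance, here's a hedged sketch (the stack and file names are placeholders) of pushing a tweaked template to an existing stack with boto3:

import boto3

# Read a template previously generated with troposphere (or written by hand).
with open("template.json") as f:
    template_body = f.read()

client = boto3.client("cloudformation")
# CloudFormation computes the minimal change set and applies it.
client.update_stack(
    StackName="my-app-stack",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_IAM"],  # needed when the stack manages IAM resources
)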

Coming Soon: Deployment

In the next post, I'll go over several options for deploying to your newly created stack. In the meantime, the AWS Container Basics README describes one simple option.

Og MacielOn Reading and writing

Picture of 'On Writing'

This week I started reading On Writing: A Memoir of the Craft by Stephen King, a book that has been mentioned a few times by people I usually interview for my weekly podcast as something that is both inspiring and has had a major impact on their lives and careers. After the third or fourth time someone mentioned it, I finally broke down and got myself a copy at the local bookstore.

I have to say that, so far, I am completely blown away by this book! I can totally see why everyone else recommended it as something that people should add to their BTR (Books To Read) list! First of all, the first section of the book, which Stephen King calls his 'C.V.' (and not his memoir or autobiography), covers his early life as a child, his experiences and struggles (there are quite a few passages that will most likely get you to laugh out loud) growing up with his mom and older brother, David. This section, roughly speaking around 100 pages or so, is so easy to relate to that you can probably be done with it in about 2 hours no matter what your reading pace is. I am always captivated to learn how someone 'came to be', the real 'behind the scenes', if you will, of how someone started out their lives and the paths they took to get to where they are now.

The next sections talk about what any aspiring writer should add to their 'toolbox', and they cover many interesting topics and suggestions which, if you really think about it, make a ton of sense. This is where I am in the book right now, and though it isn't as captivating as the first section, it should still appeal to anyone looking for solid advice on how to become a better writer, in my humble opinion.

Though I do aspire to one day become a published writer (fiction, most likely), and I am enjoying this book so much that I'm having a real hard time putting it down, the reason why I chose to write about it is related to a piece of advice that Stephen King shares with the reader about the habit of reading.

Stephen King claims that, to become a better writer one must at least obey the following rules:

  • Read every day!
  • Write every day!

It is by reading a lot (something that should come naturally to anyone who reads every day) that one learns new vocabulary, different styles of prose, how to structure ideas into paragraphs, and rhythm. He says that it doesn't matter if you read in 'tiny sips' or in huge 'swallows'; as long as you continue to read every day, you'll develop a great and, in his opinion, required habit for becoming a better writer. Obviously, based on his two rules, you'd need to write every day too, and if you're one of us who is toying with the idea of becoming a writer one day (or want to become a better writer), I too highly recommend that you give this book a shot! I know, I know, I have not finished it yet, but still... I highly recommend it!

Back to the habit of reading and the purpose of this post: I remember back in 2008 my own 'struggle' to 'find the time' to read non-technical books. You know, reading for fun? Back then I was doing a lot of reading, but mostly it consisted of blog posts and articles recommended by my RSS feeds, and since I was very much involved with a lot of different open source projects, I mostly read about GNOME, KDE, Ubuntu, and Python. Just the thought of reading a book that did not cover any of these topics gave me a feeling of uneasiness, and I couldn't picture myself dedicating time, precious time, to reading 'for fun.' But eventually I realized that I needed to add a bit more variety to my reading experience and that sitting in front of my computer during my lunch break would not help me with this at all. There were too many distractions to lure me away from any book I might be trying to read.

I started out by picking up a book that everyone around me had mentioned many times as being 'wicked cool', a 'couldn't put it down' kind of book. Back then I worked at a startup, and most of the engineers around me were much younger than me; at one point or another most of them were into 'the new Harry Potter' book. I confess that I felt judgmental and couldn't fathom the idea of reading a 'kid book', but since I was trying to create a new habit and since my previous attempts had failed miserably, I figured that something drastic was just what the doctor would have recommended. One day after work, before driving back home, I stopped by the public library and picked up Harry Potter and the Sorcerer's Stone.

Next day at work when I took my lunch break, I locked my laptop and went downstairs to a quiet corner of the building's lobby. I picked a nice, comfortable seat with a lot of natural sun light and view of the main entrance and started reading... or at least I thought I did. Whenever I started to read a paragraph, someone would open the door at the main entrance to the building either on their way in or out, and with them went my focus and my mind would start wandering. Eventually I'd catch myself and back to the book my eyes went, only to be disrupted by the next person opening the door. Needless to say, experiment 'Get More Reading Done' was an utter failure!

Caktus Group5 Ways to Deploy Your Python Web App in 2017 (PyCon 2017 Must-See Talk 4/6)

Part four of six in the 2017 edition of our annual PyCon Must-See Series, highlighting the talks our staff especially loved at PyCon. While there were many great talks, this is our team's shortlist.

I went into Andrew T Baker’s talk on deploying Python applications with some excitement about learning some new deployment methods. I had no idea that Andrew was going to deploy a simple “Hello World” app live, in 5 different ways!

  1. First up, Andrew used ngrok to expose localhost running on his machine to the web. I’ve used ngrok before to share with QA, but never thought about using it to share with a client. Interesting!

  2. Heroku was up next, with a gunicorn Python web app server and a warning that scaling beyond the one free app per account gets costly.

  3. The third deploy was “Serverless” with an example via AWS Lambda, although many other serverless options exist.

  4. The next deploy was described as the way most web shops deploy, via virtual machines. The example deploy was done over the Google Cloud Platform, but another popular method for this is via Amazon EC2. This method is fairly manual, Andrew explained, with a need to Secure Shell (SSH) into your server after you spin it up.

  5. The final deploy was done via Docker with a warning that it is still fairly new and there isn't as much documentation available.

I am planning to rewatch Andrew’s talk and follow along on my machine. I’m excited to see what I can do.
