A planet of blogs from our members...

Caktus Group | Caktus Supports the Community through Charitable Giving

Twice a year we solicit proposals from our team to contribute to a variety of non-profit organizations. With this program, we look to support groups in which Cakti are involved or that have impacted their lives in some way. This gives Caktus a chance to support our own employees as well as the wider community. For the first half of 2018, we are pleased to donate to the following organizations:

Community Health and Wellness

Part of a network of not-for-profit health system units, Duke Children’s Hospital is committed to providing excellent clinical care to infants and children. They are ranked among the top 50 children’s hospitals nationally in nine specialties and provide care to thousands of patients annually.

The Durham Exchange Family Center has been working for 25 years to reduce child abuse and neglect in the Durham area. They provide support and training for families, caregivers, and childcare professionals, as well as programming to raise community awareness surrounding issues of child abuse and neglect.

Guided by a belief that all people are worthy of equality, respect, and dignity, the LGBTQ Center of Durham seeks to create a community where all LGBTQ+ experiences are affirmed, supported, and celebrated. The center provides programming and a network of support, education, and community for improving the lives of all LGBTQ+ people in and around Durham.

Local Business and Employee Support

Caktus has been a Durham Living Wage Project supporter since its launch in 2015. The Living Wage Project alliance seeks to support Durham workers’ livelihoods by encouraging employers to pay a living wage, certifying and publicly recognizing those employers, and promoting living wages as a matter of conscience. It works to create economic prosperity for individuals and the Durham community by reducing barriers to employment and broadening the employment benefits and protections available to workers.

The Helius Foundation offers free business coaching for necessity-driven entrepreneurs in Raleigh and Durham, including a 10-week training program, continuing mentorship, and microloans. They focus on supporting individuals building sustainable, small businesses, helping Durhamites to lift themselves out of poverty through entrepreneurship and community building. (Below is a photo of graduates from the Spring 2018 training program.)

Group photo of the Launch Durham 2018 Spring Graduating Class

Art and Culture

The Carrack empowers local artists by providing professional exhibit and performance opportunities in a volunteer-run, zero-commission space. They have been essential to the movement for a rejuvenated arts scene in Durham, especially through their efforts to support emerging, experimental, and/or minority artists, as well as hosting and funding inclusive events and projects. Caktus' Operations Assistant and talented fiber artist Liza Chabot volunteers with The Carrack. She was able to exhibit her first large-scale art installation at 21c Museum Hotel thanks to support from The Carrack. (Pictured below is a detail shot from Liza's large-scale weaving. See more on her website.)

Photo of a large-scale weaving with cream colored yarns

ARTS North Carolina is the state’s advocacy organization for the arts, working towards equity in access to the arts for all North Carolinians. Their efforts seek to unify and connect North Carolina’s arts communities while fostering arts leadership and identifying and championing the most critical advocacy issues for the North Carolina arts community. Caktus Account Executive Tim Scales has been a board member and supporter of this organization for several years. He also helped to organize, and participated in, the annual Arts Day advocacy event at the State Capitol (pictured below are the county delegates at the event).

Arts 4 NC group photo

The Museum of Life and Science is an 84-acre preserve with an interactive science park and one of the largest butterfly conservatories on the East Coast. Their mission is to create an engaging place of lifelong learning. Their outdoor exhibits form safe havens for rescued black bears, lemurs, and endangered red wolves, along with 60 other species. Caktus' Chief Business Development Officer Ian Huckabee is a current museum board member and sits on the executive and finance committees. He also works with the museum's development team on the new Earth Moves exhibit, and he's a big fan of the Hideaway Woods exhibit.

Voices is one of the Triangle’s oldest choral groups with nearly 40 seasons of performances behind it. The mission of the Voices choir is to foster, sustain, and share the art and joy of choral music and to enrich the Triangle community through excellent performances of music from diverse cultures and historical periods. Caktus Developer Dan Poirier has lent his baritone voice to the group since 2004 and has served two terms on the board. (Voices members warm up before a concert in Edinburgh, pictured below.)

Voices choir members

Alley Cats and Angels

Alley Cats and Angels is an all-volunteer, foster home-based, cat rescue dedicated to helping stray, abandoned, and feral cats. Ultimately, this organization seeks to reduce the overall number of homeless cats in the Triangle through their adoption and spay/neuter assistance programs. Caktus' Lead Developer Karen Tracey has spent nine years as a dedicated foster care volunteer with this organization. She regularly brings foster litters to the Caktus office for socialization, and several Cakti have adopted adorable kittens they met through this program! (Pictured below is one of the many cat families rescued by Alley Cats and Angels.)

Gray cat with several kittens

A History of Giving

This round of donations marks four years since we first began administering our Charitable Giving Program in 2014. We are so pleased to be able to continue supporting our local community in this way. The program also provides an opportunity for us to learn more about our employees while championing the communities they contribute to and the causes they care about outside of Caktus.

Caktus Group | Outgrowing Sprints: A Shift from Scrum to Kanban

The problem

The Scrum and Kanban frameworks are tools for development teams, and as with any job, it’s crucial to pick the right tool for the situation at hand. Caktus teams have been using Scrum for over two years, but one of my teams started to bring up in retrospectives that sprint deadlines felt arbitrary and were irrelevant to anyone outside the team. They also had to do mental gymnastics to plan sprints around restricted monthly project budgets, which left those sprints so brittle they were likely to fall apart. As a result, I started to ask myself some difficult questions.

Why didn’t the team understand that sprint deadlines were there for a good reason? Sprints are essential when work has to be done iteratively, and a shippable product increment must be demonstrated to stakeholders at regular intervals. But this team was working on maintenance projects. They were not developing large, new features, but rather they were fulfilling small client requests or fixing an odd bug here and there. So, couldn’t maintenance work be done iteratively too? Well sure, but did it have to be done in sprints? The team kept bringing up sprints as a problem, so we needed to address it. I thought, what if the team could continue to work iteratively, deliver value often, AND get rid of sprints?

Kanban seemed like the out-of-the-box answer to the team’s problem. We could continue to be Agile by delivering value and gathering feedback iteratively, and we could also get around our restrictions with limited monthly hours and seemingly arbitrary deadlines that were creating friction.

The solution, maybe?

After reading about Kanban and what it would take to transition from Scrum successfully (check out Kanban and Scrum — Making the Most of Both by Kniberg and Skarin), I did what any Scrum Master would do — I brought it to the team. I called a team huddle and introduced the idea of making a Big Change to our process, in light of the issues we had been facing.

The idea was received positively as something we should explore, but it wasn’t yet clear to everyone how this new system might work in practice. I could see that not everyone was as excited about the idea as I was, so I made sure to emphasize that the decision was up to the team as a whole and that we wouldn’t make the change unless we all agreed.

We planned to discuss it in depth during our next sprint retrospective with the goal of outlining a potential transition plan, which we would then follow with a vote on whether we should proceed.


The team talked at length about how Kanban works and how it could help us, and we listed some decisions that we would need to make before the transition:

  • Design our Kanban board: the first “Rule of Kanban” is to visualize the workflow, using columns and cards on a board. We had already been using a digital sprint board with columns representing workflow states, but realized that we would want to take this opportunity to update it.
  • Assign limits for each column: the second “Rule of Kanban” is to limit work in progress by restricting the number of items allowed in each column, in order to encourage finishing tasks before starting new ones.
  • Evaluate our regularly-scheduled Scrum events: Kanban does not prescribe any specific events or meetings, but the general advice for a successful transition was to “start with what you have” so we wanted to feel free to make some changes but stay close to what we already had in place until we could get a feel for the new process.
  • Evaluate our Scrum roles: Unlike Scrum, Kanban does not prescribe any specific roles, and although Product Owner and Developer roles could easily keep their functions on the team, the fate of the Scrum Master was less clear. Had I talked my way out of this team inadvertently? I openly gave them the freedom to vote me off the team if they thought my role was not needed anymore, hoping they would choose to keep me so I could continue coaching them through this change.

There were also many outstanding questions that we needed to answer together, such as:

  • How would we plan ahead for using but not exceeding monthly project hours?
  • How would we ensure that tickets didn’t stagnate without the time pressure of a two-week sprint?
  • How would we measure velocity without sprints so we could let clients know when we expected work to be complete?
  • Did tickets all need to be the same size? And if that was the case, would the time investment in ticket management be worth it?
  • Did we need some formal training in Kanban, or would we be able to rely on what we had learned on our own?
  • How would we handle a new greenfield project without the structure of sprints?

Making the switch

After much discussion, the whole team decided that we should make the change. Although some members of the team were still hesitant (including the Product Owner), the conclusion was that we would try Kanban for a month and evaluate how it was going during our retrospectives, and go back to Scrum if it wasn’t working.

Ultimately, a bigger and better ticket board was designed collaboratively, with initial work in progress limits per column. The team kept all their Scrum meetings except for backlog grooming, and repurposed the planning meeting to include grooming of upcoming work. I was relieved that the decision to keep daily standups and regular reviews/retrospectives was unanimous, as these are important in any Agile context.

Perhaps the most challenging hurdle to clear before we felt ready to leave Scrum behind was cleaning up old work in progress. There were many forgotten tickets that had been left unfinished, languishing in the backlog, usually started during a sprint and never prioritized for the next one (another point in favor of Kanban’s high visibility into all work in progress). The Product Owner worked with the team to clean up the entire backlog, across multiple clients and projects, so that we could start fresh.

The team also decided to keep me on, changing the name of my role to Kanban Koach. Grateful for the chance to continue working with them, I reciprocated by setting up a Lego training simulation (modifying Lego4Scrum to suit my needs) so we could get a feel for how this new system would work in practice (pictured below). This activity was a fun way to help everyone acclimate to the change in an informal context.

After our final sprint retrospective, we felt confident that we were ready to venture into this brave new world of Kanban. During our scheduled planning meeting the next morning, we just … didn’t plan a sprint. We retired our sprint board, reviewed the highest priority tickets that the Product Owner had pulled into the Ready for Development column of our new Kanban board, and then the team got to work.

It was contagious

Thanks to the continued team retrospectives, we were able to make tweaks to keep improving our process as we adjusted to working without sprints. The team liked that they had total visibility into what everyone was working on rather than only on sprint tasks. They liked leaving behind the artificial pressure of sprint deadlines while still pushing themselves to deliver value regularly to their stakeholders. They also liked having the ability to be completely responsive to new requests: clients didn’t need to wait until the next sprint to have their priorities addressed; instead, we were able to get started on new work as soon as we finished what was currently in progress.

The team liked Kanban so much that word spread to the other development teams. We were asked to do a lunch talk for the rest of Caktus so everyone could hear about our experience. Some of us were later invited to sit down with other teams to answer specific questions on how to go about making the transition. One team went ahead and made the switch a couple months after the first team. Another team has changed their process to a hybrid method, incorporating aspects of Kanban on top of sprints.

Some advice

If you and your team are considering switching from Scrum to Kanban, perhaps the best advice I could give you is to start by examining the reason(s) why you are considering the change. It’s easy to fall into the trap of believing that changing methodology will be a silver bullet and solve all your problems. However, if your reasoning comes back to the Agile values and putting people over processes and tools — you should go for it! Scrum and Kanban are both wonderfully effective tools in the right context, and experimenting with changes to your process can be both fun and edifying.

If you have made this switch or are thinking about it, we’d love to hear from you in the comments below!

Caktus Group | Lessons from the Great Failure of 1858 (PyCon 2018 Must-See Talk Series)

This is the third post in the 2018 edition of our annual PyCon Must-See Series, which highlights the talks our staff especially loved at PyCon. While there were many great talks, this is our team's shortlist.

A talk I remember well from PyCon 2018 was Don't Look Back in Anger: Wildman Whitehouse and the Great Failure of 1858. By recounting the process and mistakes involved in building the first transatlantic telegraph, Lilly Ryan, a software and systems engineer and former historian, covers some of the major pressures, events, and disagreements that shaped the project. Specifically, she describes:

  • Team conflict
  • Time pressure
  • Hero mentality

Moreover, she describes the importance of the following values for development teams:

  • Treating feedback sensibly
  • Open-mindedness
  • Self-reflection and post-mortems

This entertaining 30-minute talk provides multiple examples of ways in which tech projects can go awry, and how to avoid the same mistakes in the future. I hope to practice the values Ryan mentions, and help my team to practice them as well. The advice from Ryan is helpful and relevant for anyone working on or managing a technology project or team.

Caktus Group | Make ALL Your Django Forms Better

Website experiences need to be consistent as much as they need to be well thought out and aesthetically pleasing. Structure, visual design, user interactions, and accessibility concerns are among many considerations that go into building quality websites. While achieving consistency of experience and implementation is an essential goal of web development, efficiency of execution is also very important. An efficient workflow means this consistent experience doesn’t require redoing work across the site.

This post is about efficient consistency when building forms across your site.

Django helps you build forms, but one size doesn’t fit all. It can render your forms on its own, or you can take more control of the form markup in your HTML. When Django renders your forms, you adhere to its defaults and assumptions. When those don’t match your site’s designs or other requirements, you have to do it yourself. But you can squeeze more flexibility out of Django’s own form rendering. This lets you match your form styles and implementations site-wide without losing the valuable tools Django has out-of-the-box to make form rendering easier.

Why change what Django does?

Maybe you’ve always been fine using the forms exactly as Django renders them for you. Or, maybe you’ve been building custom forms in Django for so long you don’t see what’s wrong with providing your own widget classes or adding the extra attributes to your fields. And, of course, you can get a lot of customization out of simply re-styling the form pieces in CSS after Django has done its rendering, so you have lots of options for flexibility.

There have been a lot of situations where I’ve needed to change how lots of forms are rendered, usually across an entire site:

  • Accessibility requirements stipulate aria-required and other attributes
  • Design or CSS frameworks necessitate changes to an input’s markup
  • Design or CSS frameworks necessitate changes to all inputs, like common attributes or even common event triggers
  • I need to replace the traditional file input with a smarter widget
  • I also need to replace built-in date and time inputs

None of the above are difficult to account for. The problem we’re looking at is applying this list of concerns, and more, to all form fields on an entire site, and that often includes forms that come from third-party Django apps where you don’t even have access to change the forms themselves. (Short of forking all your third-party apps, which is a really crappy proposition.)

These situations are also increasingly difficult to deal with on existing sites, because the larger the site gets, the more forms it has.

Ideally, make the changes once

Django and Python share some guiding principles about code, and one of the most important is avoiding verbosity and redundancy. The approaches above achieve neither, so let’s find a better way.

We’ll look at two things we can do. The first has the larger impact on our flexibility; the second is a smaller but still useful way of customizing form defaults.

Django widget templates

As part of the 1.11 release of Django, widgets are now rendered by templates, just like everything else. This gives you the opportunity to create your own widgets much more easily. But, it also gives us an opportunity to override the templates Django uses for the built-in widgets it comes with.

Obviously, this advice assumes that your project has been upgraded to the 1.11 release of Django or higher.

There is a new type of component in a Django project, the Form Renderer. You can imagine this is very much related to what we’re trying to do! There is a setting to select which Form Renderer you use, and Django itself comes with three choices, but you could implement your own. For our purposes, one of the built-in renderers will work, just not the default.

The default form renderer is the DjangoTemplates renderer, which sounds like it would do exactly what we need, but does not. This renderer uses its own template engine, separate from your project settings. It will load the default widgets first and then look in all your applications for widget templates for all your custom widgets.

We’ll use the TemplatesSetting renderer instead, which loads templates exactly as the rest of our project is configured.

FORM_RENDERER = 'django.forms.renderers.TemplatesSetting'

Now that the form rendering can be configured with regard to templates, let’s look at some settings that will load our widget templates in the order we want. Some of this can be changed to adapt to your needs, but this is what worked for our project:

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, 'templates')],
        'OPTIONS': {
            'loaders': [
                'django.template.loaders.filesystem.Loader',
                'django.template.loaders.app_directories.Loader',
            ],
        },
    },
]

We’re telling Django to first look for templates in our project’s own templates directory, which is where we’re going to put our widget templates. You could also override widgets in an app, but for overriding the defaults I think it is appropriate to do so in a global context.

One of the important overrides we made was to change how attributes inside the input tags are rendered. All the default widget templates exist in django/forms/widgets/ under any templates directory they’re being looked up in, so our project has the template project/templates/django/forms/widgets/attrs.html.
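For illustration, here is roughly what such an override can look like. It starts from the markup of Django’s default attrs.html and appends an ARIA attribute; the aria-required line is an assumption about what your accessibility requirements call for, not part of the default template:

```html
{# project/templates/django/forms/widgets/attrs.html #}
{% for name, value in widget.attrs.items %}{% if value is not False %} {{ name }}{% if value is not True %}="{{ value|stringformat:'s' }}"{% endif %}{% endif %}{% endfor %}{% if widget.attrs.required %} aria-required="true"{% endif %}
```

Because every built-in widget includes attrs.html, this one small template changes the rendering of every input on the site at once.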

Of course, we aren’t going to override all the default widget templates. Although you could, if you wanted to! For the templates we don’t override, we still want the renderer to look in Django’s form app. To do this, we need to add django.forms to our INSTALLED_APPS list, but we put it at the end, so that any overrides that might exist inside other apps can be found first and the defaults are always the last ones used.


INSTALLED_APPS = [
    # ... your project's other apps ...
    'django.forms',  # must be last!
]

What did we do by overriding these widget templates?

  • We added a small onchange handler to toggle a class on any input when it has a value, so our CSS can target empty or non-empty inputs. Very useful!
  • We added accessibility tags to all our inputs without exception.
  • We changed how our radio and checkbox lists were rendered to remove the colon in the labels because that didn’t match our design.
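As a sketch of the first item above, the onchange handler can live in an overridden input template. The markup follows Django’s default input.html; the has-value class name is illustrative:

```html
{# project/templates/django/forms/widgets/input.html #}
<input type="{{ widget.type }}" name="{{ widget.name }}"{% if widget.value != None %} value="{{ widget.value|stringformat:'s' }}"{% endif %}{% include "django/forms/widgets/attrs.html" %} onchange="this.classList.toggle('has-value', this.value !== '')" />
```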

The new Form Rendering system in Django 1.11 adds a lot of control we didn’t have before, and it was really fun to explore it and see how it could help us. Overall, I’m extremely happy with the result.

A trick for a little more customization

Django widget templates are a supported feature, and there are lots of other features in the framework that make tailoring your setup to a project’s needs really easy. That said, what do you do about things you can’t customize, but have a good case for?

As an example, let’s look at one more thing Django forms do out-of-the-box. When Django renders a form for you, it renders a series of both labels and fields. We’ve talked about customizing the fields, but the labels are actually external to the widgets and their templates.

I’m going to use a silly example, but a real one. In our design, we did not like the colons Django includes as a suffix of every label. Of course, there were lots of ways around this. I could create my forms with an empty label_suffix option, for example. But, if you’ve read this far, you’ll know that doing anything more than once is too often for me.

There is no setting you can use to change the default label suffix globally across a project. But there is a trick you can employ to make the base Form class that all your forms derive from use a different default: just replace the base Form class with one that does what you want!

from django import forms

class BaseForm(forms.Form):
    def __init__(self, *args, **kwargs):
        # Use an empty label suffix by default; an explicit label_suffix
        # passed by the caller still takes precedence.
        kwargs.setdefault('label_suffix', '')
        super(BaseForm, self).__init__(*args, **kwargs)

forms.Form = BaseForm

This is a roundabout way to accomplish our little goal, and I admit it is a trivial goal. You might find other reasons to do this, with other changes that you want to make across all the forms on your site. A fair warning is due, however: this is monkeypatching, it is generally frowned upon, and you have to be careful with changes that will affect code you didn’t write.

I think this is a light and safe use, but be careful if you do anything more with it!
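The setdefault-and-reassign pattern itself is plain Python, independent of Django. A minimal framework-free sketch, with Base standing in for a class like forms.Form:

```python
class Base:
    """Stand-in for a framework base class whose default suffix is ':'."""
    def __init__(self, **kwargs):
        self.label_suffix = kwargs.pop('label_suffix', ':')

class QuietBase(Base):
    """Replacement base: changes the default, but callers can still override."""
    def __init__(self, **kwargs):
        kwargs.setdefault('label_suffix', '')
        super().__init__(**kwargs)

assert Base().label_suffix == ':'                       # original default
assert QuietBase().label_suffix == ''                   # new default applies
assert QuietBase(label_suffix='!').label_suffix == '!'  # explicit value wins
```

Because setdefault only fills in a missing keyword argument, code that deliberately passes its own label_suffix behaves exactly as before.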


We haven’t gone over anything complex here. Using some simple tricks and an easy application of new features supported by Django can go a long way towards creating great form experiences across your site.

Hopefully, this helps you work faster with even better results for you, your clients, and your users.

Caktus Group | Stories of Security (PyCon 2018 Must-See Talk Series)

This is the second post in the 2018 edition of our annual PyCon Must-See Series, which highlights the talks our staff especially loved at PyCon. While there were many great talks, this is our team's shortlist.

I saw a lot of great talks at PyCon 2018, but Ying Li's keynote was one of my favorites. Li is a security engineer at Docker where she works extensively with Python. Her talk focuses on information security, and she suggests that everyone who works in technology and software should care about security.

During her presentation, Li shares an amusing children's book that she’s planning called The Professor of 0's, which describes the journey of a software development team as they learn to properly secure their web application against vulnerabilities. The children’s book illustrates Li’s points on security in a way that’s accessible and enjoyable. The book is a much appreciated, light-hearted approach to a topic that is both serious and important. Be sure to watch the video, to see how the book helps Li to get her points across.

Later in her talk, she also makes an intriguing analogy between how the security community is addressing common security vulnerabilities and how the medical community attempts to prevent SIDS (Sudden Infant Death Syndrome). Li explained that as a new mother, she was educated about a “checklist” of sorts that she could use to help prevent SIDS. She heard the information not just once, but at nearly every interaction with her various healthcare providers. That consistent (and persistent), simple-to-follow message, she argues, has helped to save the lives of thousands of infants.

Just as the medical community created simple guidelines that parents were willing and able to follow, Li sees the security community providing simple tools that developers are willing to use, in order to easily address common security vulnerabilities like CSRF attacks (see the Caktus blog post on common website vulnerabilities for information about CSRF and other common attacks). Those efforts are now reaping benefits: CSRF attacks have fallen off the Open Web Application Security Project (OWASP) Top 10 vulnerabilities list for the first time in over a decade. Li attributes this improvement to the fact that frameworks, such as Django, have added tools to prevent CSRF attacks, and to the fact that developers are using those tools. She'd like to see those types of outreach and education programs take down the other vulnerabilities on the OWASP Top 10.

Li’s talk inspires me to review our practices at Caktus to see what improvements we can make, which will make hardening our systems simpler without imposing undue work on team members.

Caktus Group | Why We Love TestBash (and You Will, Too!)

Mirror, mirror, on the wall - what's the best test conference of them all? That’s the question that many of you may be asking yourselves when trying to decide which conference to attend. Well, we believe we have discovered what may be one of the top contenders for best software testing conferences: the Ministry of Testing’s TestBash.

Each year, sessions of TestBash are held throughout the UK and in other locations around the world, including the US, Germany, and Australia. This conference was created and is run by testers, and we think that is part of what makes TestBash so special. Its main focus is helping testers learn from each other’s experiences. TestBash is far more than just a conference; it has also developed a community of testers that inspire and motivate each other to become better at what they do.

In Fall 2017, Caktus QA Analyst Gerald attended TestBash Philadelphia. His enthusiastic response encouraged Robbie to attend TestBash Brighton in March 2018, and Sarah to attend TestBash Netherlands in April 2018. Having attended three separate TestBash conferences between us, we compared notes and discussed our key takeaways.

What did you like most about TestBash?

Gerald: “The atmosphere was different than most of the other conferences that I’ve attended. People weren't afraid to speak to each other and share their experiences. I mentioned my interest in ATDD (acceptance test driven development) to an attendee at the end of the conference and he was able to connect me with someone else who had experience with it. The speakers also connect with the attendees on a personal level; they stick around for the whole conference instead of leaving once they’ve given their talk. I also loved the hands-on workshops where I had the opportunity to learn things that I could apply immediately. Also the food was really good.”

Robbie: “The conference felt like it was for testers, by testers. TestBash has a sense of community that I have not experienced at other conferences. Attendees went out of their way to talk to each other and learn from everyone at the conference, not just from the speakers. I also really enjoyed the speakers. At most conferences the speakers tend to use lots of buzzwords and lingo, but that was not the case at TestBash.”

Sarah: “Plus one to the food! The meals and snacks were amazing. More seriously, the intimacy and openness was very refreshing. Since the conference is confined to a small number of people and a small space, you feel more like individuals than sheep being herded through halls and rooms, like you can at really large conferences. You also get to feel more comfortable with the other people, and it makes it easier to talk and share experiences and knowledge.”

Who do you recommend attends TestBash?

Gerald: “I’d say that TestBash is geared towards testers of all kinds, whether that be manual, automation, or even API; but if you aren’t a tester that shouldn’t discourage you from attending. The talks are structured so that although they are coming from the perspective of a tester, the context relates to other roles involved in software development as well. I even met a developer from Sauce Labs who told me he attends just so he can learn how he could work better with the testers on his team.”

Robbie: “Anyone interested in testing, both experienced and inexperienced. A few of the speakers even talked about TestBash as being their intro to testing and that they keep coming back.”

Sarah: “Honestly, I’d recommend anyone in software development attend TestBash. It will be most valuable for testers of all types, but still provides value to developers and managers. Any role in software development can benefit from the insight into the testing culture and community provided by TestBash.”

Any tips for future attendees to get the most out of TestBash?

Gerald: “If you are on Twitter, watch the hashtag for your specific conference. People at the conference tweet throughout the whole event, even the speakers. Attendees will sometimes post links to valuable resources or additional articles related to the talks. A lot of people used Twitter to ask questions and quickly connect with other attendees. In fact, I connected with a few people by simply asking a question with the hashtag: ‘Does anyone here have experience with ATDD? #TestBashPhilly’ Also, I recommend that you stay for the social event! This is when people have more time to talk about their own experiences in testing and it gives you a chance to have deeper conversations. I connected with a few people that I’ve stayed in touch with long after the conference was over.”

Robbie: “Interact with the other attendees, get involved with the community both during the conference and at the social meet ups that happen around the conference.”

Sarah: “Stay the full day, all the way through the 99 second talks! If possible, also take a workshop beforehand; you’ll meet conference attendees in an even smaller, more intimate setting that way.”

What was your favorite talk at the TestBash you attended?

Gerald: “‘How to benefit from being uncomfortable’ by Cassandra H. Leung. This talk was about intentionally putting yourself into uncomfortable situations as a way to overcome a fear of being uncomfortable. This was a great talk because I was able to apply it to my role as a tester when it comes to asking questions about technical things that I may not understand even after it’s been explained. As someone who dreads public speaking I was motivated by this talk to force myself to give my first talk in front of our company.”

Robbie: “‘Experiences in Modern Testing’ by Alan Page. This talk was about the importance of embracing change, moving beyond creating/executing tests, and building a culture of quality. Alan, and many of the other speakers, also touched on the importance of communities of practice and continuing to learn throughout your career.”

Sarah: “‘Holding Space: Making Things Better by Doing Less’ by Maaret Pyhäjärvi. Maaret talked about how to empower and inspire teams by letting team members take action themselves, rather than working yourself to the bone. I trained as a Scrum Master for a bit several years ago and her talk struck a chord with me since it seemed to be what true ‘servant leadership’, or leading a team by serving them, is all about. It was very interesting to hear about it from a testing perspective.”

What did you take away from TestBash that you are using right now at Caktus?

Gerald: “Since attending the conference I have introduced the team to pair testing. For any opportunity that seems to fit, I pair up with a developer on the team and we test specific features together. As we’re testing we exchange thoughts, which generates extra test scenarios that may not have been covered previously.

"I also picked up a deck of TestSphere cards from the conference which highlight general quality aspects, testing techniques, and patterns that can be applied to any software project. One of the ways that we use these cards is for Risk Storming. Although not required, these TestSphere cards can be used to drive a risk assessment activity. The process involves reading through the cards to determine if any of the principles or techniques can be applied to whatever project we are working on. The cards have helped generate discussion about potential risks around the projects and as a team we think of ways that we can mitigate them.”

Robbie: “Formalizing exploratory testing, not just diving in head first but planning test sessions and taking more detailed notes as I test.”

Sarah: “We’re starting to use the 99 second talk format as a way to practice public speaking and peer feedback. We all have TestSphere decks as well.”

Would you attend TestBash again?

Gerald: “Yes, I plan to make this a conference that I attend every year.”

Robbie: “Already signed up for notices on when and where TestBash will be next year.”

Sarah: “Absolutely, and I hope to next year!”

TestBash has been a great discovery for the QA team here at Caktus. The quality of the talks, the intimate communal atmosphere, and the friendliness of the people make for an excellent conference experience. We hope to see you at the next Ministry of Testing event! In the meantime, you can read more on the Caktus blog about software quality assurance and how we prioritize defects.

(Editor’s Note: Neither the authors nor Caktus have any connection with the TestBash, other than as attendees. No payment or other compensation was received for this review. This post reflects the personal opinion and experience of the authors and should not be considered an endorsement by Caktus Group.)

Vinod KurupAutovacuum not running

OK, this is a debug session in progress, so don't expect a nice solution at the end. We're working on a project that does analysis of some public voter registration data. The DB is hosted on Amazon RDS and I've been perplexed by how poorly queries are performing there, despite the tables only having about 10 million rows. Simple queries are taking many minutes, which is orders of magnitude slower than on my laptop.

Mark suggested running 'VACUUM ANALYZE', which I didn't think would help because my understanding was that the autovacuum process in PostgreSQL would be taking care of that on a regular basis. These queries had been slow for days with no recent inserts or updates, so certainly autovacuum should have caught up to them by now. But, I tried it anyway and lo and behold:

```sql
db=> select count(*) from voter_ncvoter;
  count
----------
 12336571
(1 row)

Time: 315777.051 ms

db=> vacuum analyze;
VACUUM
Time: 11377035.096 ms

db=> select count(*) from voter_ncvoter;
  count
----------
 12336571
(1 row)

Time: 4300.107 ms
```

Woah, that worked! Sure, it took 3+ hours to run VACUUM ANALYZE, but wow. So, why isn't autovacuum automatically doing this for us? (I mean, it has 'auto' in its name!)

I found this great article on autovacuum basics, which led me to run this query:

```sql
db=> select relname, n_live_tup, last_autoanalyze
     from pg_stat_all_tables where relname like 'voter_%';
       relname       | n_live_tup |       last_autoanalyze
---------------------+------------+-------------------------------
 voter_changetracker |  306689271 | 2018-05-05 04:59:08.503876+00
 voter_filetracker   |         41 | 2018-05-13 02:00:47.802633+00
 voter_ncvhis        |          0 |
 voter_ncvoter       |   12336616 | 2018-05-06 13:20:30.073426+00
 voter_badlinerange  |        404 | 2018-04-10 05:44:39.949193+00
(5 rows)
```

So those 2 large tables haven't been ANALYZEd in weeks, despite the fact that we import a 10 million row CSV once every week. This is the end of my debugging road, for now. Hopefully, I'll figure out what's going on.
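As a sanity check on the next debugging step, the autovacuum basics article describes when autoanalyze should fire: once the number of changed rows exceeds `autovacuum_analyze_threshold` plus `autovacuum_analyze_scale_factor` times the table's live row count. Here's a quick back-of-the-envelope calculation in Python, assuming PostgreSQL's default settings (we haven't confirmed what RDS actually uses, so this is a sketch, not a diagnosis):

```python
# Autoanalyze fires when changed rows exceed:
#   autovacuum_analyze_threshold
#   + autovacuum_analyze_scale_factor * n_live_tup
# PostgreSQL defaults, assumed here; the RDS parameter group may differ.
ANALYZE_THRESHOLD = 50
ANALYZE_SCALE_FACTOR = 0.1

n_live_tup = 12_336_616  # voter_ncvoter, from pg_stat_all_tables
rows_needed = ANALYZE_THRESHOLD + ANALYZE_SCALE_FACTOR * n_live_tup
print(int(rows_needed))  # 1233711 changed rows before autoanalyze kicks in
```

With defaults, roughly 1.2 million changed rows would trigger an automatic ANALYZE on voter_ncvoter, so a weekly 10-million-row import should easily trip it. That makes the stale `last_autoanalyze` timestamps above even more puzzling, and points the investigation toward non-default settings or a starved autovacuum worker.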

Caktus Group3 Common Form Testing Issues (Plus 1 Helpful Tool)

Forms are something that I find myself testing frequently, whether it's an e-commerce checkout page or a new model in the Django admin. The challenge of forms is that users will often enter things that may not have been accounted for when the form was created.

For example, they might enter 罗比 for their name when you were only expecting users to enter names with letters of the Roman alphabet. If the unexpected characters cause errors, the user may not be able to continue using your site. Issues with user-generated inputs can be discovered before they ever happen to a user by testing with better data.

Some people will use test data similar to Mr. Tester 123, Fake Street, City, NC 12345, test@example.com. This kind of fake data can be useful if you need to remember what you used or be able to pick it out of a list of real data. The problem with using this, or other similar data, is that it is unlikely to uncover any bugs in the system because it does not match real-world uses of the system.

The type of data you use will depend on the environment in which you are testing. You may be able to use real addresses if you are in a testing environment that is isolated from other systems. However, if the environment is not sufficiently isolated you may not want to use something like Mr. President, 1600 Pennsylvania Ave, NW Washington, DC 20006 (an address that I have seen used for testing). When using real addresses you risk real-world events. You wouldn’t want to have to answer questions from law enforcement if a strange package is shipped to the President by accident.

One solution to ensure nothing is sent from a test environment is to use data that is easily identifiable as fake. When doing this, you need to be sure to still use data that is comparable to what a user may actually enter so that you do not miss important bugs. You also do not want to use John Doe, 123 Fake St, City, NC 12345 for every test. A few issues with using this or a similar address for all your tests jump out right away.

Zip code

Zip codes are one of the first problematic fields that come to mind when I think of test data. This is a field that people tend to make different assumptions about based on where they are from.

People from the US often assume all zip codes are only 5 digits (the standard in the US), and that they are always called “zip codes”. What users actually enter will vary widely based on where the users of the site are located. Some potential valid zip/postcode combinations include:

  • Numbers only
  • Numbers starting with a 0
  • 3 digits
  • 10 digits
  • Single character
  • A mix of letters and numbers
  • Spaces in the zip code

Of course, some countries do not use zip codes at all. If the zip code field is limited to only accept five digits, a large portion of users may run into issues.
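To make that concrete, here's a small sketch (my own illustration, not from any particular codebase) of how a naive five-digit validator rejects perfectly valid international postcodes; the sample codes and labels are hypothetical test data:

```python
import re

# A naive validator that assumes every zip code looks like a US one.
US_ZIP_ONLY = re.compile(r"^\d{5}$")

# Hypothetical sample of valid postcodes from around the world.
samples = {
    "02134": "US (numbers only, leading zero)",
    "12345-6789": "US ZIP+4 (10 characters)",
    "SW1A 1AA": "UK (letters, numbers, and a space)",
    "K1A 0B1": "Canada (mix of letters and numbers)",
    "100-0001": "Japan (7 digits with a hyphen)",
}

rejected = [code for code in samples if not US_ZIP_ONLY.match(code)]
print(rejected)  # everything except the plain 5-digit US code
```

Every format except the plain US one bounces, which is exactly the "large portion of users" problem described above.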


Names

The next piece of data in John Doe, 123 Fake St, City, NC 12345 that jumps out as problematic is the first and last name. An important thing to keep in mind with names is that they can vary in length, in the characters used, and in the order in which first names and family names appear. They can also include characters from other alphabets. What happens if a user enters the name Drop Table or Null? Will Christopher Null never be able to submit the form because the site thinks the field is blank, or will a database table be dropped because of Drop Table?

Unexpected inputs

The Hamlet test is one of my coworker’s favorite tests for unexpected inputs in text fields. It consists of entering the entirety of Hamlet into a text field, all 135,013 characters, with the spaces removed. This is a good way of rooting out bugs related to text length and formatting. While you may not need to allow the user to enter so much text without spaces, you do need to be sure that any resulting errors are handled in a user-friendly manner.
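As a rough sketch of the idea (the validator, field limit, and repeated stand-in string below are hypothetical; in a real test you'd paste in the actual play):

```python
# Stand-in for the full text of Hamlet with the spaces stripped out.
hamlet_like = "tobeornottobethatisthequestion" * 4500  # 135,000 characters

MAX_FIELD_LENGTH = 255  # hypothetical limit on the field under test

def validate_field(value, max_length=MAX_FIELD_LENGTH):
    """Return a friendly error message for oversized input instead of crashing."""
    if len(value) > max_length:
        return "Please enter at most %d characters." % max_length
    return None

print(validate_field("Robbie"))     # None: normal input passes
print(validate_field(hamlet_like))  # a readable error, not a traceback
```

The point isn't that the field must accept all of Hamlet; it's that whatever limit exists should produce a user-friendly error rather than a broken page.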

While test cases may not always call for entering unexpected data, including them in testing can uncover or answer additional, previously unconsidered questions. For example:

  • What languages should users be able to enter? Does this site have encoding for non-Latin alphabets or other languages?
  • If a user enters characters from a language the site does not support, how is it handled? If a user enters Arabic, that is written from right to left, will that cause some unexpected behavior?
  • What will happen when a user enters an emoticon in a text field?

There are many more to consider, but these are a few I have encountered.

Keeping track of test data

People often fall into the habit of using the same data for a field each time they test. To make keeping up with test data easier I have used text documents, spreadsheets, and sticky notes in the past. There are other tools to make this much easier.

I recently came across Bug Magnet, a helpful tool for testing user-generated inputs. Bug Magnet is a plugin for Chrome and Firefox that organizes valid and invalid data in an easy to use fashion.

Now, instead of referencing my list of possible zip codes from a doc or sticky note, I use this plugin. Right-clicking on the field that you are testing opens a submenu of example inputs, many of which can be problematic if not properly handled.

A screenshot of the options menu for the Bug Magnet testing tool.

Bug Magnet includes options for a variety of fields. If you do not see something you need, one of the other great features is the ability to add your own configurations. I recently added date formats to mine using the directions found here.

Ready to test?

When using test data, make sure to use what is most likely to uncover bugs, not just what is easy and memorable. Also, make sure that the data you use fits the testing you are doing (you do not want to make news for accidentally shipping something to the White House).

Keep in mind users will enter unexpected things if given the opportunity. Testing fields with a wide variety of data, and making it easier for yourself by installing tools like Bug Magnet, can go a long way toward improving the user experience when someone enters input that your system doesn’t support.

For more tips check out our other QA & testing blog posts, or if you know of any other tools/extensions to make testing easier let me know in the comment section.

(Editor’s Note: Neither the author nor Caktus have any connection with the Bug Magnet developers, other than as users. No payment or other compensation was received for this review. This post reflects the personal opinion and experience of the author and should not be considered an endorsement by Caktus Group.)

Caktus GroupLove Your Bugs (PyCon 2018 Must-See Talk Series)

Welcome to the 2018 edition of our annual PyCon Must-See Series, highlighting the talks our staff especially loved at PyCon. While there were many great talks, this is our team's shortlist.

My must-see talk this year was “Love your bugs” by Allison Kaptur. Fixing bugs can seem like a tedious process, but Allison demonstrates several techniques on how to adjust your frame of mind to make bug fixing and, more generally, problem-solving a useful process for you.

Allison loves bugs. She walks through several examples of complex bugs she encountered while working at Dropbox on their desktop client. As she dives into detail of the scenarios that led to the source of the bugs, you see that the investigation process, like solving a mystery, is part of the fun. Additionally, checking your assumptions and getting quick feedback while debugging is a great way to learn.

She talks about debugging requiring a growth mindset, based on research by Carol Dweck. Having a growth mindset, Allison says, frames intelligence as something that you can change or increase by exerting effort, while a fixed mindset treats intelligence as a fixed quantity, with effort not a part of the equation. At the Recurse Center, where she helped train developers, demonstrating a positive growth mindset was important to her process.

Her tips are to reframe praise and success (“that went really well since I worked hard”), reframe failure (develop lessons learned from failure), and celebrate successes. In the end, struggling through challenges and encountering bugs are expected, working hard and fixing bugs is part of the process, and learning during this process is a great way to grow.

Allison is a naturally good speaker and I highly recommend her PyCon 2018 talk.

Caktus GroupPyCon 2018 Recap

Making connections

Before the conference, our team listed “making connections” as one of the main reasons to attend PyCon. We certainly did that, welcoming visitors to the booth and catching up with friends old and new.

Ultimate Tic Tac Toe returned with an upgraded AI to play against. It was a tough one to beat this year! We had a couple of people achieve victory, though.

Winner of Ultimate Tic Tac Toe in front of the Caktus booth at PyCon 2018.

We also gave away two Raspberry Pi 3 kits to lucky winners.


Learning from fellow Pythonistas is another reason our team loves going to PyCon. The keynotes were highlighted as particularly engaging, and many other talks were called out by attendees on Twitter.

Look out for the 2018 edition of our PyCon Must-See series, coming soon!

PyLadies auction

The PyLadies auction sold out this year for the first time. Bidding was hot for items ranging from Tesla coil music-makers to cross-stitch samplers and limited-edition prints.

The sold-out room at the PyLadies auction.

Cakti love to support the larger community and this year we were excited to donate an item to the PyLadies auction. This luxurious handwoven scarf, created by a member of the Caktus team, will let its new owner represent Python in style. Thank you to the buyer for supporting PyLadies!

Python-themed scarf, hand-woven by Elizabeth Chabot for the PyLadies auction.

Long live Python

It was another great year at PyCon! Thanks to all of the Python community for participating, and extra thanks to the organizers and volunteers. We appreciate all that you do!

Caktus GroupAvoiding the Blame Game in Scrum

The words we use, and the tone in which we use them, can either nurture or hinder the growth of Scrum teams. This is especially true for teams that are new to Scrum or that may be transitioning into a new Agile methodology.

To understand how what you say can have an impact on your team, let's dig a little deeper and review what Scrum is all about. According to TechTarget, Scrum is defined as “a framework for project management that emphasizes teamwork, accountability and iterative progress toward a well-defined goal.” Communication is an important part of this process, via team meetings or events involving face-to-face communication, or video or audio conference calls on teams that have remote team members. The purpose of these events is to provide structure within each sprint.

First, the team meets before the sprint begins, to plan what they will work on (sprint planning). Throughout the sprint, team members meet each day to provide updates on sprint work (standups). At the end of each sprint, the team meets to discuss how the sprint went (sprint retrospective).

In these meetings, there are opportunities for dialogue between team members. Ideally, the dialogue is positive and constructive. When it is not, it can be detrimental and harmful to the maturation of the team.

The Blame Game

Regardless of the intent or context of the things we say to one another, the tone is usually the first thing that is noticed. As a result, we instinctively tend to respond to the tone and not the context of what was communicated.

Tone is vital when effectively communicating with peers on a Scrum team, especially when things don’t go as well as planned. When a failure occurs, whether it is missed deadlines or critical bugs that slip into production, our first instinct is to determine the reason why. If we are not careful, this can often lead to placing the blame on others. It’s easy to forget that we should be functioning as a team and we may end up blaming individuals for a failure when actually that weight falls on the team as a whole.

Blaming other team members for failing on a project can destroy trust between individuals, which will inevitably ruin relationships. Think about what blame does: when a person is criticized, especially publicly, it brings on feelings of shame and humiliation. These feelings may cause an individual to doubt themselves or doubt their team members. Being blamed can destroy a person's confidence, which plays a huge role in how productive they can be. Like the flu, blaming is also contagious and has a tendency to spread.

Take the following example: Asking another member of the team, “Why didn’t you get that work done?” versus “Why weren’t we able to get this work done?”

When someone is blamed and isolated from the team, they immediately respond in one of two ways: they become defensive and look for someone else to pass the blame to, or they shut down and emotionally put up a wall that blocks communication.

As a result, the growth and maturity of a team can be hindered. Team members will be less willing to take on tasks for fear of being blamed. Finger pointing within a team degrades the trust between individuals, and less trust leads to less communication. Team members may be less likely to collaborate, which slows the efficiency of the team.

Team Mindset

Being part of a Scrum team involves holding yourself and each other accountable. This should not be confused with placing all the responsibility on one person. The Scrum Guide states that “Scrum recognizes no titles for Development Team members, regardless of the work being performed by the person.” Developers are responsible for more than just writing code, quality assurance (QA) analysts are responsible for more than just testing, and the same goes for other roles on the team.

For example, when involved in the early stages of development, a QA analyst can raise questions about features and functionality. Asking these questions helps flesh out functionality and design issues that may have slipped under the radar. This is one way that QA can contribute to the team outside of testing. Having this team mindset, versus an individualistic mindset, is important to the team and project success.

The following are a few examples of the benefit of a team-driven mindset.

Example 1

Let's say you are in a daily stand up and a developer gives the following update:

“Yesterday I lost a lot of valuable time trying to figure out what the business analyst or product owner wants us to implement. The documents they provided aren’t very clear and lack the detail we need before work can start.”

Notice how this dialogue points the finger at the business analyst and product owner, as if the documentation provided is a burden for the developers and that “they” should do better at providing clearer documentation.

Now, let's see what a more Scrum-friendly update could be:

Dev: “Yesterday I spent some time looking over the acceptance criteria for the upcoming stories. There are a few things that I could use some clarification on, so I’ll set aside some time today to talk about them with the BA and PO. Maybe if we discuss these things together we can develop a clear understanding that will allow us to start implementing some of these tickets.”

The second version still highlights the issue, but it doesn't point fingers. It also references a solution and plan of action to resolve the issue as a team, with the developer and other team members working together, as opposed to placing blame for a lack of detail. As a result, the business analyst or product owner will learn how to provide better documentation for the developers, and won't feel singled out.

Example 2

During a retrospective the QA analyst gives the following feedback:

“One thing that went wrong in this past sprint was that the developers waited until the last minute before any of the tickets were ready for QA. As a result, some things did not make it through testing, which is why we did not complete all our sprint goals.”

Compare the above statement with a statement that addresses the same concern but does not point a finger at an individual or group:

“In the last sprint, some features were delayed because development took longer than we anticipated and we ran out of time to test. I think we could have spent a little more time discussing the details of the tickets that we committed to. In this next sprint, perhaps we should take the time to discuss in detail what it will take to implement and test each story. If we have a clearer understanding of the ticket we are picking up, that may prevent us from over-committing on sprint work.”

Observe how the second version still states that there was an issue of the team not reaching sprint goals, but doesn't point the blame at the developers. It instead describes what the issue was and also provides a solution that involves everyone.

As a result, the developers are aware of the need for more caution in estimating how long it will take to implement things to allow for adequate QA testing. During the next sprint planning, hopefully everyone will be open and involved with discussing how much effort will be required in implementing and testing each story before they decide to commit to it.

It’s important to be conscious of the way we speak and communicate with teammates on an Agile team. There is no development team, QA team, or business team; everyone collectively makes up one Scrum/Agile Team. If one person stumbles, we all stumble. Great communication and following Agile best practices are a major stepping stone to becoming a successful, mature Agile team.

Caktus GroupCreating Dynamic Forms with Django

What is a dynamic form and why would you want one?

Usually, you know what a form is going to look like when you build it. You know how many fields it has, what types they are, and how they’re going to be laid out on the page. Most forms you create in a web app are fixed and static, except for the data within the fields.

A dynamic form doesn’t always have a fixed number of fields and you don’t know them when you build the form. The user might be adding multiple lines to a form, or even multiple complex parts like a series of dates for an event. These are forms that need to change the number of fields they have at runtime, and they’re harder to build. But the process of making them can be pretty straightforward if you use Django’s form system properly.

Django does have a formsets feature to handle multiple forms combined on one page, but that isn’t always a great match and they can be difficult to use at times. We’re going to look at a more straightforward approach here.

Creating a dynamic form

For our examples, we’re going to let the user create a profile including a number of interests listed. They can add any number of interests, and we’ll make sure they don’t repeat themselves by verifying there are no duplicates. They’ll be able to add new ones, remove old ones, and rename the interests they’ve already added to tell other users of the site about themselves.

Start with the basic static profile form.

from django import forms
from django.db import models

class Profile(models.Model):
    first_name = models.CharField(max_length=255)
    last_name = models.CharField(max_length=255)
    interest = models.CharField(max_length=255)

class ProfileForm(forms.ModelForm):
    first_name = forms.CharField(required=True)
    last_name = forms.CharField(required=True)
    interest = forms.CharField(required=True)

    class Meta:
        model = Profile
        fields = ["first_name", "last_name", "interest"]
Create a fixed number of interest fields for the user to enter.

class Profile(models.Model):
    first_name = models.CharField(max_length=255)
    last_name = models.CharField(max_length=255)

class ProfileInterest(models.Model):
    profile = models.ForeignKey(Profile, on_delete=models.CASCADE)
    interest = models.CharField(max_length=255)

class ProfileForm(forms.ModelForm):
    first_name = forms.CharField(required=True)
    last_name = forms.CharField(required=True)
    interest_0 = forms.CharField(required=True)
    interest_1 = forms.CharField(required=True)
    interest_2 = forms.CharField(required=True)

    class Meta:
        model = Profile
        fields = ["first_name", "last_name"]

    def save(self):
        profile = self.instance
        profile.first_name = self.cleaned_data["first_name"]
        profile.last_name = self.cleaned_data["last_name"]
        profile.save()

        for i in range(3):
            interest = self.cleaned_data["interest_{}".format(i)]
            ProfileInterest.objects.create(
                profile=profile, interest=interest)

But since our model can handle any number of interests, we want our form to do so as well.

class ProfileForm(forms.ModelForm):
    first_name = forms.CharField(required=True)
    last_name = forms.CharField(required=True)

    class Meta:
        model = Profile
        fields = ["first_name", "last_name"]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        interests = ProfileInterest.objects.filter(
            profile=self.instance)
        for i in range(len(interests) + 1):
            field_name = 'interest_%s' % (i,)
            self.fields[field_name] = forms.CharField(required=False)
            try:
                self.initial[field_name] = interests[i].interest
            except IndexError:
                self.initial[field_name] = ""
        # Always leave one extra blank field for a new interest.
        field_name = 'interest_%s' % (i + 1,)
        self.fields[field_name] = forms.CharField(required=False)

    def clean(self):
        interests = set()
        i = 0
        field_name = 'interest_%s' % (i,)
        while self.cleaned_data.get(field_name):
            interest = self.cleaned_data[field_name]
            if interest in interests:
                self.add_error(field_name, 'Duplicate')
            else:
                interests.add(interest)
            i += 1
            field_name = 'interest_%s' % (i,)
        self.cleaned_data["interests"] = interests

    def save(self):
        profile = self.instance
        profile.first_name = self.cleaned_data["first_name"]
        profile.last_name = self.cleaned_data["last_name"]
        profile.save()

        # Replace the old interests with the submitted set.
        profile.profileinterest_set.all().delete()
        for interest in self.cleaned_data["interests"]:
            ProfileInterest.objects.create(
                profile=profile, interest=interest)

Rendering the dynamic fields together

You won’t know how many fields you have when rendering your template now. So how do you render a dynamic form?

def get_interest_fields(self):
    for field_name in self.fields:
        if field_name.startswith('interest_'):
            yield self[field_name]

The last line is the most important. Looking up the field by name on the form object itself (using bracket syntax) will give you bound form fields, which you need to render the fields associated with the form and any current data.

{% for interest_field in form.get_interest_fields %}
    {{ interest_field }}
{% endfor %}

Reducing round trips to the server

It’s great that the user can add any number of interests to their profile now, but it’s kind of tedious that we make them save the form for every one they add. We can improve the form in a final step by making it as dynamic on the client side as it is on the server side.

We can also let the user enter many more entries at one time. We can remove the inputs from entries they’re deleting, too. Both changes make this form much easier to use on top of the existing functionality.

Adding fields on the fly

To add fields spontaneously, clone the current field when it gets used, appending a new one to the end of your list of inputs.

$('.interest-list-new').on('input', function() {
    let $this = $(this)
    let $clone = $this.clone()

You’ll need to increment the numbering in the name, so the new field has the next correct number in the list of inputs.

    let name = $clone.attr('name')
    let n = parseInt(name.split('_')[1]) + 1
    name = 'interest_' + n

The cloned field needs to be cleared and renamed, and the event listeners for this whole behavior rewired to the clone instead of the original last field in the list.

    $clone.attr('name', name)
    $this.off('input', arguments.callee)
    $clone.on('input', arguments.callee)

Removing fields on the fly

Simply hide empty fields when the user leaves them, so they still submit but don’t show to the user. On submit, handle them the same but only use those which were initially filled.

    .on("blur", function() {
        var value = $(this).val();
        if (value === "") {
            $(this).hide();  // hidden fields still submit, but stay out of the way
        }
    })

Why dynamic forms matter

An unsatisfying user experience that takes up valuable time may convince users to leave your site and go somewhere else. Dynamic forms can be a great way to improve the user experience, cutting down on round trips to the server and keeping your users engaged.

Caktus GroupPrioritizing Defects

A defect, or bug, in a software product can be defined as a flaw in the system that leads to a measurable or observable deviation from its expected result. During development, it’s part of the quality assurance process to prioritize defects in order to minimize the impact to the end product and meet the agreed-upon quality level for the product. This prioritization can seem like a dark art. How do we decide what gets addressed and what doesn’t?

Assessing the impact of a defect

We fix bugs because they have an impact on the product being built. Since resources (in the form of time, people, money, etc.) are limited, we devote them to fixing bugs that have the highest estimated impact. We can assess the estimated impact of a defect using two metrics: how much the defect may cost and how severe it is.
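Purely as an illustration of combining those two metrics, here's a hypothetical scoring sketch; the weights, severity scale, and dollar figures are invented for the example, not a standard formula:

```python
# Hypothetical scoring: weight the estimated cost of a defect by its
# severity (1 = critical .. 3 = minor) and fix the highest scores first.
def priority_score(estimated_cost, severity):
    severity_weight = {1: 3.0, 2: 2.0, 3: 1.0}
    return estimated_cost * severity_weight[severity]

bugs = [
    ("typo in the footer", priority_score(100, 3)),
    ("checkout loses the shipping address", priority_score(10_000, 2)),
    ("app crashes on launch", priority_score(50_000, 1)),
]
bugs.sort(key=lambda bug: bug[1], reverse=True)
print([name for name, score in bugs])  # crash first, typo last
```

However the weighting is tuned, the point is the same: limited resources go to the defects whose combination of cost and severity is worst.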

How much is this defect going to cost us?

A defect in a live software environment has three kinds of costs associated with it: direct costs, indirect costs, and correction costs.

Direct costs

Defects that directly impact the ability for a software product to earn money, or directly lose money, result in direct costs. A defect can result in data loss, incorrect orders, decreased user costs (or increased user costs that you later have to reimburse), or damage to software, hardware, or people.

A famous, and extreme, example of direct defect costs is the Mars Climate Orbiter mishap, in which a multi-million-dollar mission to Mars ended in 1999 with the disintegration of the orbiter due to a unit conversion error: trajectory calculations used imperial units rather than metric.

Indirect costs

Indirect costs are incurred when the end user is dissatisfied with the product. These costs can take the form of lower-than-expected sales, increased tech/customer support requirements, legal fees, cancellation of licenses, etc.

For example, if your software product allows the purchasing of items, but only 50% of users who start to purchase an item actually complete the transaction, there’s a significant portion of potential revenue that you are not recognizing because half your end users are giving up. Maybe there’s a bug that occurs when they edit the quantity of an item in their cart and the checkout form loses their shipping address. This seems very simple, but the indirect cost of frustration can be high.

Correction costs

The costs of fixing defects increase as you proceed through the stages of the software development life cycle (SDLC). If the project is over, you may end up paying more for developers to dive back into the code, debug, and resolve the issue than you would if they were still actively working in the code prior to release.

Complexity also increases correction costs; the more complex a system, the more entangled a defect is likely to be with other components. This entails more debugging, unit testing, and regression testing. A live product is the most complex state of a system, so fixing a defect is most expensive after the product has gone live.

These costs may present themselves as dollar amounts, but they often consume time and staff resources as well. The general rule is that the earlier in the SDLC a defect is identified and corrected, the lower the cost.

How severe is the defect?

Severity is a factor used to identify how much a defect impairs product usage. There are many scales of severity, but an example is:

  • Severity 1 - System failure or crash, product is unreleasable (often labeled as Critical, Urgent, or Must Fix).
  • Severity 2 - Malfunction of a component or system, core functionality is impaired.
  • Severity 3 - Incorrect function of a component or system, functionality does not work as intended. Product is usable but with workarounds.
  • Severity 4 - Minor deviation, system is usable.
  • Severity 5 - Very minor deviation, system is usable.

Severity is usually set by the testing team.

Assigning priority

Now that our project manager (PM) has assessed the cost and severity of the defect, they consult two additional aspects of the project to assign priority: schedule and quality bar.

Project schedule

The status of the project schedule helps inform how urgent it is to correct the defect. Fixing bugs requires resources and it’s important to know resource availability. Project progress against the schedule can also inform what types of defects need to be addressed. In general, the closer the project is to release, the more discerning the PM needs to be about high priority defects.

Example 1: The project is a video game and the next milestone is a live demo for a specific conference. Everything included in the demo needs to be correct, so defects related to this feature set may be prioritized high. Defects that don’t apply to the demo may be prioritized as low until after the conference.

Example 2: The project is a web app that is two weeks from release. Defects that can’t be reproduced reliably, or don’t impact core functionality, or only impact a very small number of users, may be deprioritized in order to make room to fix defects that do impact core functionality and large sets of the user base.

Quality bar

If a level of quality is targeted and agreed upon by stakeholders, this quality bar can provide direction regarding priority. If the documented level of quality for a project states that all severity 1, 2, and 3 bugs occurring on a specific browser will be fixed, those bugs will get prioritized over those that occur in another browser.

Note that severity does not equal priority. The two often correlate, but it should not be assumed that a severity 2 defect will be a high-priority issue, or that a severity 5 will be low priority. For example, a spelling error is usually a severity 5 issue, since it doesn’t affect the usage of the system. However, if the spelling error is in the product or company name, that defect is a high-priority issue.
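
The distinction can be made concrete with a small sketch. The data model below is invented for illustration, not taken from any particular issue tracker: severity and priority are separate fields, and triage orders the fix queue by priority first.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    title: str
    severity: int   # 1 (critical) .. 5 (very minor deviation)
    priority: str   # set during triage: "high", "medium", or "low"

defects = [
    Defect("Tooltip misaligned in settings", severity=4, priority="low"),
    Defect("Company name misspelled on home page", severity=5, priority="high"),
    Defect("Checkout crashes on submit", severity=1, priority="high"),
]

# Fix order is driven by priority, with severity only as a tiebreaker,
# so a severity 5 spelling error can still land near the front of the queue.
rank = {"high": 0, "medium": 1, "low": 2}
fix_queue = sorted(defects, key=lambda d: (rank[d.priority], d.severity))
```

Here the severity 5 misspelled company name is fixed before the severity 4 tooltip issue, because triage judged its business impact to be higher.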

Deciding what to fix and what to defer

Priority scales generally go from high to low, with varying grades in between. Defects are usually fixed in order of priority, from high to low. In every project it becomes necessary not only to assign priority to defects, but also to call out specific defects that will be deferred or not addressed during this phase of development. Some defects aren’t worth the resources needed to correct them.

An example might be a graphics error: if you’ve played a 3D video game, you’ve no doubt encountered visuals in the environment that overlap or are placed oddly. Maybe there are two pieces of terrain that intersect incorrectly, or a plant that is floating above the ground. These are simple issues, but fixing them requires a person to edit the environment, a new build process, and a new testing pass.

Any edit to the environment, which is complex and made of many intricate pieces, has the chance to cause new defects. At some point the cost of fixing a floating plant is not worth it: the impact on the end user is negligible, the game still works, the potential to introduce more defects is too high, and assigning resources to this defect means taking resources away from defects with more severe consequences.

Deciding what defects make the cut and which ones don’t is not done lightly. However, those decisions are crucial to releasing products in a timely fashion without an extreme amount of expense. Often PMs will involve stakeholders, QA, and other team members when making these decisions to ensure that all parties are accurately informed and able to give their input. Remember: bugs are part of software development.

Choosing which bugs get squashed and which bugs enter the wild is difficult, but necessary for the success of your product. Resources not absorbed by fixing the small, low-impact defects are resources that can be reallocated to delivering high quality, high value features.

If you need assistance prioritizing an overwhelming list of defects, you can reach out to our experienced project managers, QA team, and developers who can advise on best practices.

Caktus Group: Caktus Recognized as Top Web Development Company in Raleigh

Since Caktus’ founding in 2007, we have dedicated ourselves to growing sharp web apps the right way. The tenets of our Success Model drive us to focus on strategic partnerships, prioritize the most valuable features, develop for scalability, and recruit a sharp team. We’re pleased and honored that this focus on doing things right has been recognized by leading review website Clutch.

In their most recent research into the top development and IT consulting firms, Clutch named Caktus number one for web development and number two for app development in the Raleigh area. As long-time Triangle residents, we’re proud that our locally-grown Django and Python apps have gained such recognition. We also congratulate our team; our strength comes from the many skilled people we have pulling together to make each project a success.

Of course, those familiar with Caktus will know that building custom websites and apps is just part of the picture here. We care about being responsible members of the tech community, giving back to local charities and meetups. Our hope is that by doing our part, we can contribute to the Triangle’s continued expansion as a great place to live and a magnet for top talent - a benefit to everyone.


We look forward to growing more sharp web apps and continuing our practice of doing things right. Get in touch to speak to our team about your web development project, product discovery or consulting needs, and team augmentation.

Caktus Group: The Users We Don't Know

Businesses and organizations come to Caktus to build custom web applications that will help solve their users’ problems. Before contacting us, clients spend time and effort thinking through their users’ problems. But in doing so they do not always talk directly to potential users of the application. As a result, they come up with ideas for the application based on who they think the target users are rather than who they know those users are. Or, if they have correctly identified their target user segment, they may make assumptions about the ways users think, behave, or accomplish tasks.

Know Who Your Target Users Are

The identification of a target user segment is particularly important for an as-yet undefined idea for a custom web application. It is also useful to validate assumptions about the target users of an existing application that needs redesign.

On a recent project, the client came to Caktus with a well thought-out concept, a solid definition of the project goal, and a description of a potential user base. We suggested conducting UX research to better understand the users before taking a deep dive into designing the system. Together with the client, we recruited a small sample of users for a qualitative study, in which we interviewed people who met the profile of the target user segment.

The client participated in user interviews and had direct, real-time access to insights from this research. Before the study was over, we realized that the primary target user segment for the application should not be the group originally identified, but a segment with a different set of psychographics (attitudes, motivations, aspirations, needs, etc.). This important discovery will guide the design and development of the application. Without UX research, we would have built software for the wrong primary audience.

Know the Target Users’ Mental Models

The false consensus effect occurs when we overestimate how many other people agree with our beliefs or share our behaviors. In software design and development, it happens when we assume that the way we think about how our application should work aligns with how users think about it — when we project our mental models onto our users.

It’s easy to assume that users will think about accomplishing tasks within an application the same way we do. Unless we talk to users, we won’t really know for sure. And unless we know, we run a risk of spending significant effort and resources on building a system that doesn't make sense to users.

On a website redesign project, the client provided an initial sitemap and navigation labels consistent with the way they thought about their content. We conducted UX research, including card sorting, tree testing, and usability testing, to define the new information architecture and site navigation.

One navigation label in particular presented a challenge, with users consistently unable to locate the content it was supposed to represent. Over a series of consecutive user testing sessions, we teased out the words users mentioned when describing that content type. We adjusted the navigation label accordingly, and as a result most users could locate the content correctly.

Without UX research, the website navigation would have rendered an important section of the website content undiscoverable to most users, leading to a missed opportunity to deliver value to those users and jeopardizing the achievement of business goals.

Bottom Line

UX research may sometimes seem like a redundant effort and an expenditure that can be avoided. After all, we “know” who our users are, what they need, how they work, and how they want to use the applications.

Or do we? Making assumptions about who our target users are and the mental models guiding their behavior without verifying those assumptions through UX research can be a costly mistake. A series of small-scale, qualitative studies can significantly reduce the risk that we’ll be building the wrong thing for the wrong group of people.

At Caktus, we recommend including UX research on a project. We also offer UX team augmentation services to support our clients’ requirements gathering and research efforts.

Caktus Group: 5 Scrum Master Lessons Learned

March 2018 marked the end of my fourth year as a Scrum Master. I began with a Certified ScrumMaster workshop in 2014 and haven’t stopped learning since. I keep a running list titled “Lessons Learned,” where I jot down thoughts I find significant as they occur to me, so that I can go back to the list and draw from my little bank of personal wisdom.

Some of the items on the list are practical (“Estimate size, derive duration!”), some are abstract (“Don’t let the process get in the way”), and some are just reminders (“Stop being resistant to change, let yourself be flexible”). They are the distilled product of my experience working with Scrum teams. Here are a few that I would like to share with you; I hope you will find them useful in your own path.

1. Learn about people

Learning about Scrum and Agile is essential to a Scrum Master’s development. There are multitudes of books, blogs, podcasts, and other materials available to fulfill that need. However, a more well-rounded curriculum also includes learning about people. After all, the Agile manifesto begins with “Individuals and interactions over processes and tools.”

The Scrum Master role is about dealing with people and how they communicate and work together, more than it is about process and process frameworks. Without understanding people and the nuances of their interactions, how can a Scrum Master be an effective servant leader?

I suggest reading about teams, leadership, management, psychology, and anything else that might give you insight into people and how they work. Here are some examples:

  • The Five Dysfunctions of a Team by Patrick Lencioni is an excellent place to start. It is an enlightening introduction to the roots of the problems you have likely observed on your own team, and gives some practical ideas for how to address them.
  • If you work in software development, chances are you have some introverts on your team. Quiet: The Power of Introverts in a World That Can’t Stop Talking by Susan Cain will help you understand what it’s like for those individuals to work on a team and how you can help them.
  • Drive: The Surprising Truth About What Motivates Us by Daniel H. Pink gives great insight into the intrinsic motivation factors of autonomy, mastery, and purpose that align neatly with the Agile values.

2. Buy sticky notes - lots of them, all the colors

Not just sticky notes: index cards, colored markers, sticky dots, funny stickers, cork boards, whiteboards, magnets, flip charts, tokens, butcher paper, the list goes on. If a team is co-located (even temporarily), physical exercises will come in handy. I don’t mean jumping jacks, but activities where everyone is actively participating instead of watching one person move virtual cards around a virtual board on a screen.

The team will feel more engaged and involved if they are standing, moving around the room, physically doing the planning, or the writing, or the moving of cards. I have observed this firsthand in activities like user story mapping, user story writing workshops, sprint planning, retrospectives, and daily standups.

The act of writing on paper can be much more powerful than typing and can be more easily displayed publicly. A team’s Definition of Done should be visible and obvious - write it on a flip chart sheet and hang it up on the wall of the team room (better yet, have the team write it together). The act of writing will help them remember it, and help them own it.

A physical burndown chart that team members update every day as part of standup brings everyone’s attention to it, where they might not think to go look at their digital tool. Have fun with it too - one team I work with uses emoji stickers to mark tickets that will require lengthy QA time.

3. Don’t force it

It may be intuitive for many people who find themselves in the role of Scrum Master to try to make development teams conform to Scrum, or make organization leaders see that they need Agile in their lives, or make managers understand how they should interact with the team, or make their team adopt new engineering practices.

This type of “command and control” approach is not compatible with the Agile mindset, and can be extremely detrimental to fledgling teams (even more so to ones who have already been working in Agile), who will chafe at being told what to do and react negatively. It’s also going to be frustrating for you when it doesn’t go your way - and it won’t.

Instead of trying to force the results you want, first examine your reasons for wanting those results: is it because it’s in the best interest of the team, or is it just what you want? Then consciously let go of what you want, even if it’s what you think is best for the team.

Start asking questions. Why isn’t the team paying attention to the sprint burndown? Ask them instead of becoming frustrated when they don’t heed your daily reminders to stay aware of the sprint’s progress. Maybe it’s hidden away behind some easy-to-miss menu in their digital tool, or maybe they don’t feel enough ownership of the sprint work to care about its progress. Why aren’t team members practicing pair programming daily, when you have repeatedly made the case for its usefulness? Maybe your team is composed of introverts who are uncomfortable sitting close to others and talking out loud for extended periods of time, and who feel they produce their best results when they are allowed to achieve a state of flow in isolation and privacy.

Ask the team, instead of trying to come up with the answers on your own. It’s easy to think or assume you know what the other party’s motivations are, but the only way to know is to ask. Once you understand the reasons why by asking the right questions, you can begin addressing the root cause in a way that will truly help whoever you’re working with achieve their goals, not the results you want from them.

4. Curb your inner helicopter Scrum Master

Let the team fail and recover on their own instead of swooping in with advice or corrective action at every sign of danger. Not letting the team make mistakes seems intuitive - after all, you are partially responsible for their success, and failure may reflect negatively on your work with the team.

It is the responsibility of the Scrum Master to ensure that impediments are removed, and you may see future pitfalls as impediments in the team’s way. However, it will be more beneficial for the team in the long term to help them learn how to identify those dangers and take action themselves, rather than relying on you to constantly be on the lookout in their stead.

It is a core concept of the Agile mindset that learning from mistakes is more effective and valuable than learning from success. Instead of preventing mistakes and failures, ensure that the team has a safe environment to make mistakes, where failures are low-risk and low-impact:

  • You can foster a culture of trust where the team will not be afraid of ridicule and repercussions for making mistakes.
  • Working in short iterations means that, if unsuccessful, one sprint won’t be likely to sink the whole project.
  • You could encourage the use of testing environments where experimentation can be carried out safely without impacting the live product and its users.
  • Continuous integration and deployment practices make implementing and testing small changes to the code effortless, and help lower risk at the time of release.

Don’t attempt to solve or prevent all of the team’s problems for them like a helicopter parent might “hover” over their children, even if the solution is obvious to you. Instead, let them make the mistakes, and ensure that they can learn from them and use that knowledge to improve as a team and prevent future mistakes.

5. Know what success looks like

When a team is first formed or adopting an Agile framework for the first time, they will likely need the Scrum Master to guide them through everything, from facilitating every meeting to removing every impediment. A good Scrum Master can shine in these moments, jumping at every call for help and doing everything they can to see their team through difficult situations.

It feels great to be needed and depended upon for your expertise, and there’s a lot of career advice that emphasizes the benefits of making yourself indispensable to gain recognition and job security. But is it actually a good thing when a team that’s been working together for months or years continues to look to their Scrum Master for help at every turn?

I believe that the best sign of a Scrum Master’s success is that their team no longer needs them. It means they have set their team up to be independent, self-organized, empowered, and striving to continuously improve without being pushed to do so. This isn’t going to happen overnight, and it will require careful consideration of whether the team is ready to take over the responsibilities that the Scrum Master has been fulfilling.

A good way to pilot this is to just not show up and see what happens. Don’t attend every standup, miss a sprint planning or retrospective every now and then, or even take a vacation and don’t worry about having anyone fill in for you. Did the team keep functioning normally? Maybe stay silent during a conflict. Did the team resolve it without your input? If yes, then you have achieved success: your team no longer needs you.

So what now? You don’t have to dust off your resume quite yet. The team may still need your assistance in some cases, such as removing organizational impediments that are outside their sphere of influence, or individual team members may still need coaching. You may be called upon to see them through some major changes, or help them kick off a new project.

You will also expand your efforts working with others in the company, such as managers and executives, to help them create an environment where the development teams can continue to flourish. Check in with your team at regular intervals - even if they don’t need you, they may still want you around!

Caktus Group: Caktus at PyCon 2018

We’re one month away from PyCon 2018 and are looking forward to this year’s event in Cleveland, OH. Caktus is proud to sponsor once again and will be in attendance with a booth.

Caktus Booth

Building and renewing contacts in the Python community is one of our favorite parts of participating in PyCon. Stop by our booth May 10-12 to talk about Python and your next custom web development project, and to enjoy swag, games, and giveaways.

We have two Raspberry Pi 3 kits to give away to lucky winners. All you have to do to enter is take a quick survey at our booth and leave your email address so that we can contact you if you’ve won.

Some of you may remember our Ultimate Tic Tac Toe game from last year. Since then, our developers have been hard at work improving the AI and transferring it to a Raspberry Pi. We only had a couple of champions last year. Will you beat the game this year?

Kurtis, the winner of last year's Ultimate Tic Tac Toe game.

For those attending the PyLadies auction on Saturday, May 12, a gorgeous scarf will be up for grabs. Hand-made by local Durham weaver and fiber artist Elizabeth Chabot, this piece in Python colors will let you show off your love for the language in style.


One of the reasons our team loves PyCon is the opportunity to keep skills sharp and learn from the range of excellent talks. This year the team is excited about several sessions on the schedule.

Some of these will likely appear in our annual PyCon Must-See Talks series, so if you can’t make it this year check back in June for the attendees’ top picks.

Job Fair

Are you a sharp Django web developer searching for your next opportunity? Good news - we’re hiring! View the spec and apply from our Careers page. We’ll also have a table at the job fair, so come meet the hiring manager and learn more about what it’s like to work at Caktus.

Don’t be a stranger!

Come say hi at the booth, look for members of the Caktus team in our new hoodies, or set up a meeting in advance to schedule a dedicated time to meet.

The new Caktus hoodie, in teal with a white logo.

Whether you’re at PyCon or following along from home, we’ll be tweeting from @CaktusGroup. Be sure to follow us for the latest updates from the event.

Hope to see you in May!

Caktus Group: Agile for Stakeholders

In Agile development, a stakeholder is anyone outside the development team with a stake in the success of the project. If you are a stakeholder, knowledge of Agile will help you understand how the project will be developed and managed, when you can expect to see progress, and what the team needs from you in order to deliver their best work. Understanding these basic concepts and what your role entails are essential to your project’s success.

What is Agile (and why should you care)?

Agile was invented as a set of values and principles to guide software development teams in adapting to change and acknowledging unknowns. In development, an enormous amount of time and energy can be spent on managing change: changing expectations, changing market landscapes, changing requirements, and changing knowledge of the work.

Since change is a constant, it makes sense to build a process that takes it into account as expected. Agile is an iterative, incremental approach to software development and delivery that allows for uncertainty and change.

There are many methodologies, practices, and processes that fall under the “Agile umbrella.” For example, you might have heard of Scrum, Kanban, user stories, or sprints. These may or may not be used by the development team you work with. You should feel free to ask about them if you are curious about the team’s internal workings, but none of them are necessary to understanding the gist of what Agile is and how it works.

Why Agile?

Agile was introduced as a reaction to “waterfall” development, where work is done in long, consecutive phases of requirements gathering, analysis, design, coding, and testing, each of which can last weeks or months. While there is nothing inherently wrong with this approach, it does present significant challenges.

Time to market

If you want to launch your software in a competitive market, you may need to assess whether spending years on development before being able to release anything will be viable for your business. During that timeline, it’s possible that you will be outrun by your competition, or that the market will change in such a way that your product will no longer be cutting edge, or even relevant at all. Technology changes quickly, and so do consumers’ needs and expectations - you will need to be able to keep up.

Running out of time

Imagine that your waterfall project deadline is fast approaching. It’s likely that development is either in the coding or testing phase. If the work is running behind schedule and that deadline can’t be pushed out for business or budget reasons, either scope will have to be cut during the coding phase, or the development team will have to burn budget in scrambling to get the initial scope implemented in time.

The testing phase might also be cut short, leaving little time to test the software and to identify and fix defects. All aspects of the project suffer in this case, and the likely result is a low-quality product that will not meet your customers’ needs.

Measuring progress

In waterfall, working software isn’t produced until the coding phase has completed (relatively late in the overall development schedule). This makes it difficult to measure progress and know if the project is on schedule, or how close it is to completion. You could be more than halfway through your timeline and have nothing more to show for the time and money spent than documentation of requirements and designs. If the project runs out of budget at this point, there is no part of the software that is usable and your investment is wasted.

Increasing risk

In traditional development, risk only increases as the project progresses, because the work cannot be validated, from either a technical or a business standpoint, until the last phase of development. If any major problems are uncovered in the testing phase (such as issues with the basic architecture of the app), significant rework will be required.

The rework might entail going back to the beginning phases and revising requirements and designs, then refactoring code. This will have a major impact on the project budget.

Change requests

The waterfall approach to software development does not support responding to change quickly or efficiently. If changes to the requirements are raised during the requirements gathering phase, they can probably be incorporated fairly smoothly. However, the farther along the project is, the more complex and time-consuming any change becomes.

Waterfall relies heavily on rigid requirements because they have to be handed off to a design team, who will then pass the designs to the coding team, who will then hand off software to a testing team. Any need for changes to the end product requires a change request going through each team in turn, which will take more time the farther along the project is.

All of this does not mean that development can’t be done in this way. Waterfall has become something of a dirty word in development, but that is not necessarily warranted. Some types of development work can be done well in long, consecutive phases with delivery at the very end, if there is no uncertainty about the work and if the capacity and capability of the development team(s) is a stable, known quantity. However, these ideal circumstances are rather rare. This is where Agile can help.

Agile in Practice

Since the concepts of Agile are generally abstract, it can be a struggle for anyone unfamiliar with this approach to understand how it works and why it matters. As a client, you might begin to ask yourself why any of this is relevant to you; if this is the way that development teams need to work, then great - they should do that! But your role and participation as a stakeholder are vital to the success of this approach.

This section provides an overview of how Agile development works in practice and what you can expect, as well as what the team will expect of you.

Step 1: Break it down

When development is cleared to begin (generally after some initial discovery work), the first step for the development team will be to break down the work into small chunks. While there will still be many unknowns at this point, this is a good place to begin.

Once the team has enough information to get started, they will generate a list called the “product backlog.” Each item in this backlog will represent some small piece of functionality for the product, such as a user’s ability to perform a specific action (e.g., logging in). These small pieces are what will allow the team to implement features in an incremental, iterative way.

For you as a stakeholder, this step can include participating in a discovery workshop; story mapping activities; and discussing project vision, goals, and strategy. The purpose of this early collaboration is to reach a shared understanding of what the team will be building and why. The team will have questions for you about initial scope, specific features, content strategy, and more. Alignment between you and the team at this stage is what will start the project off on the right foot.

Step 2: Estimate everything

Once the backlog for a new project has been created, the developers will estimate each backlog item. This step is somewhat optional depending on the nature of the project and on the team’s established processes. The estimates will help them (and you) understand how much work each item will be to implement relative to the other items in the product backlog. Most Agile teams use a point system to do this.

As stakeholder, you may have visibility into those estimates, which will help you give input on prioritization decisions throughout the project. It’s also important to remember that estimates are just that - estimates. They will be imprecise (and sometimes inaccurate), but they will be updated and refined as the team progresses through the work and accumulates knowledge.

Step 3: Prioritize, prioritize, prioritize

The product backlog isn’t ready to be worked on until it has been prioritized. Prioritization will be based on multiple components, including business value, estimates of effort, and various risk factors. It’s important to note that the reason the backlog is one unified list is that the priorities will be ordered from top to bottom: each item in the backlog is a higher priority than the one directly below it. This means that no two items can be the exact same priority, purposefully forcing some tough decisions.
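This strict top-to-bottom ordering can be sketched in a few lines of Python (a hypothetical illustration, not the data model of any particular tool):

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    title: str
    points: int  # relative effort estimate

@dataclass
class Backlog:
    # Position in this list *is* the priority: index 0 is the most
    # important item, and no two items can share a rank.
    items: list = field(default_factory=list)

    def add(self, item: BacklogItem, rank: int) -> None:
        # Inserting at a rank bumps everything below it down by one.
        self.items.insert(rank, item)

backlog = Backlog()
backlog.add(BacklogItem("User can log in", 3), 0)
backlog.add(BacklogItem("User can reset password", 5), 1)
backlog.add(BacklogItem("Admin can deactivate users", 2), 1)
# "Admin can deactivate users" now outranks "User can reset password".
```

Notice that adding an item at a given rank necessarily pushes everything below it down, which is exactly the tough trade-off the unified list forces.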

The team’s product owner (PO) is responsible for maintaining the backlog, ensuring that it is clear and accurate. The PO will need your help, however, to understand the details and value of the backlog items. He or she is likely to ask for your input on high-level feature priorities and will ensure that the backlog is prioritized correctly to make the most use of development time.

Step 4: Start building

Once the product backlog is prioritized, the developers can begin implementation. They will pull items from the top of the backlog only. The most important work is always done first, saving less important work for later in the likely event that the team does not get through the entire backlog before time or budget runs out.

When the team selects a backlog item to work on, it will go through multiple phases in quick succession, such as analysis, design, coding, testing, and validation. While this sounds very much like the waterfall phases outlined above, the difference is that each backlog item moves through these phases individually and relatively quickly thanks to their small size.

As stakeholder, you will be kept up to date about what the team is working on and when you can expect to see new functionality.

A note about sprints

You may hear the development team refer to sprints, or say that they work in sprints. A sprint, or iteration, is a timebox in which the team completes a set of backlog items. Not all Agile teams work in sprints, and sprint length varies by team, typically from one to four weeks.

At the beginning of a sprint, the team identifies high priority work from the backlog that they can complete in that timeframe and commits to getting it done. Once the sprint has started, it’s important for current priorities to remain stable, meaning that the work pulled into the sprint can’t be switched out for other work.

This allows the development team to focus on finishing a set of backlog items without interruptions or distractions, also limiting work in progress for efficiency. The backlog priorities can still be updated at any time, until the beginning of the next sprint.
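The "pull high-priority work that fits the timebox" step can be sketched as a simple greedy selection (hypothetical titles and point values):

```python
def plan_sprint(backlog, capacity):
    """Pull items from the top of a prioritized backlog until the
    team's point capacity for the sprint is used up.

    `backlog` is a list of (title, points) tuples, highest priority
    first; `capacity` is how many points the team expects to finish.
    Items that don't fit are skipped so smaller ones can fill the gap --
    a simplification; real teams also weigh dependencies and risk.
    """
    sprint, used = [], 0
    for title, points in backlog:
        if used + points <= capacity:
            sprint.append(title)
            used += points
    return sprint

backlog = [("Log in", 3), ("Reset password", 5), ("Audit log", 8), ("Help page", 2)]
print(plan_sprint(backlog, 10))  # ['Log in', 'Reset password', 'Help page']
```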

Step 5: Review and feedback

Once the team has completed an item or a set of items from the product backlog, the work will be presented to the stakeholders for review. This is usually a live demo, but can also be just a notification that new functionality is available for you to look at yourself. You can expect that, unless noted otherwise, the completed work is fully tested and functional.

If the stakeholders are happy with the work, great! The team will move on to the next items on the backlog. Otherwise, any requested changes are entered into the backlog as new items and prioritized along with everything else left to do.

Reviewing completed work and providing feedback is your most important responsibility as a stakeholder. The team needs to know whether they have built the right thing, whether it matches your expectations, and whether they are heading in the right direction.

The team also needs your negative feedback. It’s always nice to hear what you like about what they have built, but it’s important for them to hear what you are unhappy with in order to course-correct and improve. (Check out this post for some of the techniques we use to gather your feedback.) Early and regular feedback is crucial to the Agile approach.

Evolving the backlog

The backlog changes constantly. Items are added, deleted, rewritten, re-estimated, and reprioritized. The backlog is a living artifact that is updated as work is completed, feedback is gathered, new information is acquired, new knowledge is gained, and new ideas are generated. It becomes a sort of wish list, rather than a set of rigid requirements.

As stakeholders, you can always request that new items be added to the backlog. Be prepared to answer questions about how important those new items are to you in comparison to the others. Remember that adding something to the backlog means bumping something else down in priority.

Step 6: Adaptive Planning

In order to predict when the project will be ready for a release or another milestone, the product owner will create a plan based on the pace of development and how much remains in the backlog. The product owner will use this plan to forecast either a date by which a determined scope (set of backlog items) can be completed or how much scope can be completed before a determined date.

If the forecast shows that the desired scope can be completed for the desired date, then no changes are required. If things change along the way, the plan is updated to reflect the changes, either by decreasing scope or pushing out the date.
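The forecasting arithmetic for a fixed scope is straightforward (a sketch with made-up numbers; real release plans typically use a velocity range rather than a single figure):

```python
import math
from datetime import date, timedelta

def forecast_finish(remaining_points, velocity, sprint_length_days, start):
    """Forecast the date a fixed scope will be done, given the team's
    average velocity (points completed per sprint)."""
    sprints_needed = math.ceil(remaining_points / velocity)
    return start + timedelta(days=sprints_needed * sprint_length_days)

# 80 points left at 20 points per two-week sprint => 4 sprints = 8 weeks.
print(forecast_finish(80, 20, 14, date(2018, 4, 2)))  # 2018-05-28
```

Flipping the same arithmetic around (how many points fit before a fixed date) gives the scope-driven version of the forecast.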

Although you will have visibility into the plan early, it’s important to remember that it will inevitably change. Some flexibility in scope or time is absolutely necessary for any development team to deliver high-quality work.

The Bottom Line

The point of Agile is to start by building something small and simple, validating early and often that it’s the right thing or heading in the right direction, and then iterating by improving on it or adding more to it. This is in opposition to more traditional approaches in that you as stakeholder don’t have to wait until the end of a project to see the work. You are part of the process; you have visibility into real, measurable progress. You can change your mind as you also learn about your product and its users along the way. By playing an active role in the process, you can help ensure your product's success.

Caktus GroupShipIt Day Recap Q1 2018

Another quarter, another ShipIt Day! Take a look at what our team dove into in the first part of 2018.

Digital linguistic resources

Neil recently discovered there has been work done to create digital resources for his favorite language, Coptic. The database is a collection of normalized text that has linked words that point to dictionary entries. He wanted to branch off the projects that exist, get his own project going, and make improvements to the interface.

Using Elasticsearch to process the data files and index them resulted in a huge collection of XML files, one for each letter in the Coptic alphabet with additional spelling and grammatical information. Neil used Postgres and Adjacent fields to store this data, then hooked it up to a search interface. He also set up a Digital Ocean droplet to host everything.

In the process, he also found out that he had indexed 501 lexical entries — about all the droplet can handle. In the future, he’ll work toward an improved version of the dictionary.

Learning React

As part of efforts to standardize the front-end tech stack, Kia worked on learning more about React. Although tutorials with real-world examples were difficult to find, she was able to think of situations in which React would come in handy. Its power really lies in small, modular pieces you can chain together to create neat user interface experiences.

Kia looks forward to introducing that to some potential projects in the future and will continue learning React on her own time.

Mark also spent some time learning React and built a puzzle game, similar to a handheld number tile puzzle. He liked a tutorial that treated React as a plain JavaScript tool, with no build system or fancy syntax required. It walked through building React components for someone already familiar with writing JavaScript, which he found to be a useful reference.

Get the code for the game on GitHub.


Dan was curious about Brython, a Python 3 implementation for client-side web programming which lets you write Python that runs in the browser. He decided to build a replica of the mobile game Flow Free using the tool. He did all the logic in Python, which was more familiar and easier for him than the usual client-side JavaScript.

Scrum Trouble board game

Gerald wanted to come up with a creative way to incorporate some of the principles of Scrum and the things we see in sprints on a day-to-day basis. The result? The Scrum Trouble Board Game!

A few of the cards from the Scrum Trouble board game.

His game adapts Trouble and Exploding Kittens with game mechanics like Sabotage (things that can go wrong), Action (actions to overcome sabotage), and Generic cards (perform no action but can be combined with other cards to gain action cards from other players).

It emphasizes the importance of QA and testing, and enables learning of Scrum principles and Agile thinking in an engaging way.

Neural network image classifiers

Calvin explored neural network image classifiers, using a wrapper around TensorFlow to follow an “Is this a cat or a dog?” tutorial.

He found it straightforward to set up a convolutional network and let the wrapper handle the math. After that, he adjusted the layers and started the training. Calvin built a command line tool that separates images into their appropriate categories, which he feels went well; he plans to keep iterating on it for improvements.

As part of the process, Calvin also made the tutorial more generic to increase flexibility, for example, to use any animals and not just cats or dogs.


Inspired by Ned Jackson Lovely’s talk at PyCon 2014, Scott worked on getting a remote-control helicopter to fly using an Arduino and Python code. He got the LEDs to blink and then got it to fly!

In the process, he found the Arduino is a great way to do embedded programming because it makes it super simple to transfer code from your computer. There was already an existing Python library for this helicopter, making it an ideal project to test.

Mapping user experiences

UX designer Basia read Jim Kalbach’s book Mapping Experiences and was inspired to think about how the techniques of mapping user experiences are applicable to the work we do here at Caktus.

In order to map experience at the level of applications we build for our clients, we conduct user story mapping. However, if we think about what we do as helping our clients deliver value to their users, we also need to consider mapping user experience in terms of finding value alignment and adjust it accordingly.

Customer journey mapping.

Three maps that could empower us to find more value for our clients include:

  • User experience map
  • Service Blueprint
  • Customer (or User) Journey Map (CJM)

Each maps the exchange of value in a different way and could provide additional insights for our clients.

Tequila conversion

Dmitriy worked on converting the Caktus website from Margarita to Tequila. He successfully got part of the way through. In the process, he thought of some suggestions for improvements to the documentation, including some formatting changes. Dmitriy also found some things to improve on the Caktus website that he will implement as part of ongoing improvement work.

Redmine project board

One of the Caktus development teams uses a physical board to track projects and progress. However, it can be hard to keep track of all of the tickets when working remotely.

Phil sought to recreate the board digitally with a Vue.js front end driven by the Redmine and JIRA APIs. JIRA doesn’t allow cross-origin (CORS) API calls, but he was able to make a UI board with blue stickies. It currently has no moving functionality, but you can enlarge a ticket so it is more readable. He’s looking into a workaround for the problem with the JIRA API.

He would eventually like to add functionality, including making comments, assigning tickets, and moving tickets.

Diversity and inclusivity in the hiring process

As part of Caktus’ ongoing hiring efforts, Liza worked on improving the diversity and inclusivity of the hiring process by testing Textio, an augmented tool for writing job descriptions. Textio analyzes job location and industry/field as well as the language of the job description to make recommendations on word choice, tone, and structure. The tool is best known for helping companies develop more engaging job descriptions with consistently balanced and inclusive language, thereby attracting more diverse talent.

Improving skills

Charlotte started reading a book called Coaching Agile Teams, while Robbie studied for the ISTQB software testing certification. Jeff read High-Performance Django while helping out with deployment issues on other projects.

Show me more!

To find out what we've done for past ShipIt Days, see our other blog posts.

Caktus GroupWhen a Clean Merge is Wrong

Git conflicts tell us when changes don’t go together. When working with other developers, or even when working on more than one branch by yourself, changes to the same code can happen. Trying to merge them together will stop Git in its tracks.

Conflicting changes are marked in their files with clear indicators to show what changes Git couldn’t figure out how to merge on its own. Current changes are shown on the top and the changes to merge in are shown below.

Changes in a Git merge.

When the merge does not have any conflicts, everything is fine and you can move on with your day.


This was just an example, but here’s another set of changes from two branches I made recently. In one branch I was sorting a sequence of templates:

A code block sorting a sequence of templates.

In another branch I was adding an “Introduction” page at the beginning of the same list of templates:

A code block showing the addition of an introduction page to a list of templates.

Both of these branches were merged to the mainline branch. I expected them to have caused a conflict, but they didn’t. Git decided it could figure out the order in which I wanted these two lines added to the same place.

A code block showing the effect of the combined merge.

It might be clear from this GitHub diff what’s wrong with the way Git merged the two changes together. First, I’m inserting that new page to the beginning of the list. But second, I’m sorting that same list so the new page is no longer at the beginning.
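The semantic conflict is easy to reproduce in a few lines of Python (hypothetical template names; the real branches touched application code):

```python
templates = ["welcome", "about", "contact"]

# Branch 1: sort the sequence of templates.
# Branch 2: insert an "introduction" page at the beginning.
# Git merged both line edits cleanly, so the combined code does this:
templates.insert(0, "introduction")
templates.sort()

# The new page is no longer first -- the sort undid the insert.
print(templates)  # ['about', 'contact', 'introduction', 'welcome']
```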

The bug, caused by a merge that looked clean, had to be fixed in yet a third pull request (PR). That’s something I want to avoid in the future and, thankfully, that’s actually pretty easy with some forethought.

The simplest protection? GitHub’s Protected Branches feature. I can turn this on in the Settings section of the repository, in the Branches section.

Menu item for navigating to GitHub's Protected Branches feature.

I want to protect the develop or master branch and control all the PRs that merge into it. First, add the branch to be protected.

Selecting a branch in Protected Branches.

Next, enable three settings:

  • Protect this branch, enabling branch protection
  • Require status checks, enabling conditions that have to be met before a PR can be merged
  • Require branches to be up to date, making one of those conditions that each PR has the latest changes from the upstream branch merged into it before it can be merged and closed

Setting up branch protection.

This will stop anyone from merging a branch that hasn’t been updated, giving you a chance to see the results of the merge before you actually push it upstream.

There are more options you can enable to give you even stronger safety nets. GitHub can run the test suite automatically using a continuous integration (CI) service like Travis CI or CircleCI, and does its best to make the process painless to set up. CI integration will run the whole test suite when someone creates a PR, when it gets updated, and when branches are merged. GitHub won’t let you merge a PR if the CI hasn’t given it the green light. This may slow down workflows, but it is worth it to know the right things are being merged safely, and it can save you time in the long run.

Of course, it won’t do everything. Once a branch is updated with the latest from a master or develop branch, a safety checklist should be followed:

  • Have an extensive test suite and be sure that any new changes or additions to a branch are covered by new or adjusted tests.
  • If behavior is added or changed, update the tests accordingly so the changes remain verified when updates, merges, and future changes could break them.
  • Check the project after merging, even with a quick smoke test. Don’t assume changes that looked fine on a branch won’t break once merged. Look again.

Developers rely on a lot of tooling. Sometimes tooling fails and some of those times more tooling is actually a good solution (like GitHub helping protect us from common Git mistakes), but don’t forget the human solution of simply being more vigilant.

One last note: protected branches can be great for small teams, where the team is likely to have only a handful of PRs open at any one time. For a larger team, it may become burdensome that every PR needs to be updated and have CI run again, since the number of PRs open (and thus affected by every merge) is much larger. In this case, teams may need to coordinate better or find other tooling options that work better in those situations.

Read more posts by Calvin on the Caktus blog.

Caktus GroupWhat is Software Quality Assurance?

A crucial but often overlooked aspect of software development is quality assurance (QA). If you have an app in progress, you will likely hear this term throughout the development life cycle. It may seem that coding is the brunt of the development work, since without code your app doesn’t exist, but quality assurance efforts often account for up to 50% of the total project effort (1) (and part of the QA effort is coding). Without quality assurance, your app may exist but it is unlikely to function well, meet the objectives of your users, or be maintainable in the future. QA is important, but what exactly is it?

QA factors

Software quality assurance is a collection of processes and methods employed during the software development life cycle (SDLC) to ensure the end product meets specified requirements and/or fulfills the stated or implied needs of the customer and end user. Software quality, or the degree to which a software product meets the aforementioned specifications, comprises the following factors as defined by the ISO/IEC Standard 9126-1: functionality, reliability, usability, efficiency, maintainability, and portability. The following sections will go over what these factors are in more detail, and how quality can be assessed for each.


Functionality

Functionality, as an aspect of software quality, refers to all of the required and specified capabilities of a system. High quality is achieved in this aspect if implemented functionality works as described in the specifications. Arguably, you could have a software product with high functionality that does not have any of the remaining aspects and is still useful to some extent. The same cannot be said for the other quality assurance factors.

The key to ensuring correct functionality in a software product is to start specifying functionality early, in the discovery phase. Requirements need to be teased out, defined, and recorded. This can be done in a discovery workshop or other forms of requirements gathering, and will continue to occur throughout the SDLC. Requirements often change throughout a project, and it’s important that any changes be documented and communicated to all parties.

With documented specifications, functionality can be assessed during development with white box testing techniques like unit tests or subtests and black box testing techniques like exploratory testing.

At Caktus, white box testing is primarily handled by our developers, while black box testing is the domain of our Quality Assurance Analysts. Functionality assessment occurs in every step of the development process, from initial discovery to deployment (and future maintenance).


Reliability

Reliability is defined as the ability of a system to continue functioning under specific use over a specific period. In order to assess reliability, it’s important to identify how the software will be used early in the development process. How many requests per second should the app support? Do you anticipate large spikes in traffic tied to scheduled events (e.g., beginning of school year, end of fiscal year, conferences)?

Expected usage can inform the technology stack and infrastructure decisions in the beginning phases of development. Reliability testing can include load testing and forced failures of the system to test ease and timing of recoverability.
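A minimal sketch of the idea behind load testing, using a stand-in handler function; a real reliability test would drive the deployed system over HTTP with a dedicated tool such as Locust or JMeter:

```python
import time

def handle_request(payload):
    """Stand-in for an application endpoint doing some work."""
    return sum(payload)

def measure_throughput(handler, requests, payload):
    """Send `requests` sequential requests and report requests/second,
    which can then be compared against the expected-usage target."""
    start = time.perf_counter()
    for _ in range(requests):
        handler(payload)
    elapsed = time.perf_counter() - start
    return requests / elapsed

rps = measure_throughput(handle_request, 10_000, list(range(100)))
print(f"{rps:.0f} requests/second")
```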


Usability

Usability refers to whether end users can or will use the system. It’s important to identify who your users are and assess how they will use the system.

Questions asked and answered while assessing usability are: How difficult is it for users to understand the system? What level of effort is required to use the software? Does the system follow usability guidelines (e.g., comply with usability heuristics and UX best practices, or adhere to a style guide)? Does the system comply with web accessibility standards (e.g., Web Content Accessibility Guidelines or Section 508)?

Conducting usability testing with end users helps uncover usability problems within the system.

Efficiency, maintainability, and portability

Software efficiency refers to the measurement of software performance in relation to the amount of resources used. Efficiency testing evaluates compliance to standards and specifications, resource utilization, and timing of tasks.

Maintainability refers to the ease with which the software can be modified to correct defects, meet new requirements, and make future maintenance easier. An example of poor maintainability might be using a technology that is no longer actively supported or does not easily integrate with other technologies.

Portability refers to the ability to transfer the software from one tech stack or hardware environment to another. The requirements for these three aspects should be discussed by project stakeholders early in development and measured throughout development.

Important notes about quality

The above quality characteristics (functionality, reliability, usability, efficiency, maintainability, and portability) must be individually prioritized for each project, as it is impossible for a system to fulfill each characteristic equally well. Focusing on one aspect may mean making decisions that negatively affect another (for example, choosing to use technologies that make a product highly maintainable may make it much more difficult to port). Frequently, a specific product requires a very narrow focus on one aspect; a tool that has a very small number of users only needs to be usable for them, not the whole gamut of humanity.

Target quality for a product should be discussed among all stakeholders and agreed upon in writing as early as possible. This quality agreement should be stored somewhere easily accessible by all team members and referenced frequently during execution of quality assurance tasks.

There’s an unspoken tenet of software development that says no product can be defect-free. In order for software to be successfully developed and deployed into the wild, it’s important that all parties acknowledge this.

Striving for perfection and a 100% defect-free app will waste time and resources, and ultimately be futile. Similarly, it’s important to recognize that the absence of identified defects does not indicate a product is defect-free; more likely, the absence of defects indicates the product has not been thoroughly tested.

The goal of quality assurance is not to ensure there are no defects in the software, but to ensure that the agreed upon quality level is met and maintained. You should expect that some known defects will be low priority and not fixed before deployment. Additionally, you should expect that some defects will be very high priority and required to fix prior to deployment. Priority of defects should be determined by a combination of the quality agreement, severity of the issue, and stage in the SDLC. We’ll go into more details regarding prioritization of defects in a later post.
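As a toy illustration of how such a rule might combine those inputs (purely hypothetical; every team's quality agreement defines its own):

```python
def defect_priority(severity, stage, quality_floor="medium"):
    """Combine severity ('low'/'medium'/'high') with SDLC stage to
    decide a defect's fate. Illustrative only -- the real rule comes
    from the project's quality agreement."""
    rank = {"low": 0, "medium": 1, "high": 2}
    score = rank[severity]
    if stage == "pre-release":
        score += 1  # defects found close to release are more urgent
    must_fix = score >= rank[quality_floor] + 1
    return "fix before deployment" if must_fix else "log in backlog"

print(defect_priority("high", "development"))  # fix before deployment
print(defect_priority("low", "development"))   # log in backlog
```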


(1)Andreas Spillner, Tilo Linz, and Hans Schaefer. Software Testing Foundations, 4th edition.

Caktus GroupManaging Sprint Reviews for Multiple Clients or Projects

Sprint reviews can be a challenge for teams working with multiple clients and managing multiple projects. At Caktus, we combine more traditional sprint review guidelines with some tweaks to fit our company’s and clients’ needs.

Meeting preparation

The morning of the sprint review, our Scrum Master shares the sprint goals with the stakeholders. This reminds stakeholders what we were working on and allows them to decide if what we are reviewing is relevant to them.

Before the sprint review meeting, the team gets together to determine the presentation order and who will present what. As the product owner (PO) for my team, I go through each of the sprint goals, organized by client, and we discuss what will and will not be presented.

Meeting structure

If there are no external stakeholders, the meeting is time-boxed and follows the general flow below to keep things organized and moving forward:

Starting the meeting

The product owner starts the meeting:

  • Sets the stage
  • Introduces attendees (when necessary)
  • States what will and will not be demoed from the sprint

Presenting work

The team presents completed work on staging or production, or work that is in QA but not yet complete if there is value in getting early feedback on it. Incomplete work is presented with the caveat that it is not done; we do not share any work that exists solely on a developer’s local environment.

  • Team members demo the work, individually or jointly
  • The presenting team member discusses any applicable key events, major challenges, and solutions
  • The PO asks for questions and feedback from stakeholders, recording it for later prioritization in the backlog

Discussion of the backlog

Once the demo is complete, the PO leads discussion of the backlog:

  • Review the next highest backlog priorities and projections/release plan (if appropriate)
  • Solicit opinions on those priorities
  • Take into account feedback from the sprint review and re-evaluate the backlog for next sprint planning

When these meetings include only internal stakeholders or a single client, we go through this script once.

In the cases where the team is working on projects for multiple clients, we break our meetings into half-hour or one-hour chunks. We then go through this script with each client, discussing only their pertinent projects.

Why do it this way?

Following this format gives each project the time required to have a thorough and helpful sprint review, and keep things on track for both the team and the client. It allows the client to see their features come to fruition and gives them the opportunity to ask questions in real time to the developers who do the actual work. It also allows the developers to hear feedback directly from the clients and gives both an opportunity for dialogue. Finally, POs can get a sense of how to start adjusting the backlog for the upcoming sprint.

If you found this helpful, check out these other project management tips.

Caktus GroupBasics of Django Rest Framework

What Is Django Rest Framework?

Django Rest Framework (DRF) is a library which works with standard Django models to build a flexible and powerful API for your project.

Basic Architecture

A DRF API is composed of 3 layers: the serializer, the viewset, and the router.

  • Serializer: converts the information stored in the database and defined by the Django models into a format which is more easily transmitted via an API
  • Viewset: defines the functions (read, create, update, delete) which will be available via the API
  • Router: defines the URLs which will provide access to each viewset

A graphic depicting the layers of a Django Rest Framework API.


Serializers

Django models intuitively represent data stored in your database, but an API will need to transmit information in a less complex structure. While your data will be represented as instances of your Model classes in your Python code, it needs to be translated into a format like JSON in order to be communicated over an API.

The DRF serializer handles this translation. When a user submits information (such as creating a new instance) through the API, the serializer takes the data, validates it, and converts it into something Django can slot into a Model instance. Similarly, when a user accesses information via the API the relevant instances are fed into the serializer, which parses them into a format that can easily be fed out as JSON to the user.
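The round trip can be sketched in plain Python; this is a toy analogue of what a DRF serializer automates, not DRF itself:

```python
class ToySerializer:
    """Hand-rolled sketch of the serializer's two jobs: validate
    inbound data and translate objects to JSON-ready dicts."""
    fields = ("name",)

    def to_representation(self, obj):
        # Outbound: model instance -> plain dict, ready to dump as JSON.
        return {f: getattr(obj, f) for f in self.fields}

    def to_internal_value(self, data):
        # Inbound: submitted dict -> validated kwargs for a model.
        missing = [f for f in self.fields if f not in data]
        if missing:
            raise ValueError(f"missing fields: {missing}")
        return {f: data[f] for f in self.fields}

class Thing:
    def __init__(self, name):
        self.name = name

s = ToySerializer()
print(s.to_representation(Thing("widget")))     # {'name': 'widget'}
print(s.to_internal_value({"name": "gadget"}))  # {'name': 'gadget'}
```

DRF's real serializers add field typing, per-field validation, and model integration on top of this basic shape.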

The most common form that a DRF serializer will take is one that is tied directly to a Django model:

class ThingSerializer(serializers.ModelSerializer):
    class Meta:
        model = Thing
        fields = ('name', )

Setting fields allows you to specify exactly which fields are accessible using this serializer. Alternatively, exclude can be set instead of fields, which will include all of the model’s fields except those listed in exclude.

Serializers are an incredibly flexible and powerful component of DRF. While attaching a serializer to a model is the most common use, serializers can be used to make any kind of Python data structure available via the API according to defined parameters.


Viewsets

A given serializer will parse information in both directions (reads and writes), but the ViewSet is where the available operations are defined. The most common ViewSet is the ModelViewSet, which has the following built-in operations:

  • Create an instance: create()
  • Retrieve/Read an instance: retrieve()
  • Update an instance (all fields or only selected fields): update() or partial_update()
  • Destroy/Delete an instance: destroy()
  • List instances (paginated by default): list()

Each of these associated functions can be overwritten if different behavior is desired, but the standard functionality works with minimal code, as follows:

class ThingViewSet(viewsets.ModelViewSet):
    queryset = Thing.objects.all()
    serializer_class = ThingSerializer

If you need more customization, you can use generic viewsets instead of the ModelViewSet or even individual custom views.


Routers

Finally, the router provides the surface layer of your API. To avoid creating endless “list”, “detail” and “edit” URLs, the DRF routers bundle all the URLs needed for a given viewset into one line per viewset, like so:

# Initialize the DRF router; only once per urls.py file
from rest_framework import routers

router = routers.DefaultRouter()

# Register the viewset
router.register(r'thing', main_api.ThingViewSet)

Then, all of the viewsets you registered with the router can be added to the usual urlpatterns:

urlpatterns += [url(r'^', include(router.urls))]

And you’re up and running! Your API can now be accessed just like any of your other Django pages. Next, you’ll want to make sure people can find out how to use it.


Documentation

While all code benefits from good documentation, this is even more crucial for a public-facing API, since APIs can’t be browsed the same way a user interface can. Fortunately, DRF can use the logic of your API code to automatically generate an entire tree of API documentation, with just a single addition to your Django urlpatterns:

from rest_framework.documentation import include_docs_urls

url(r'^docs/', include_docs_urls(title='My API')),

Where next?

With just that simple code, you can add an API layer to an existing Django project. Leveraging the power of an API enables you to build great add-ons to your existing apps, or empowers your users to build their own niche functionality that exponentially increases the value of what you already provide. For more information about getting started with APIs and Django Rest Framework, check out this talk.

Caktus GroupAdd Value To Your Django Project With An API

How do your users interact with your web app? Do you have users who are requesting new features? Are there more good feature requests than you have developer hours to build? Often, a small addition to your app can open the door to let users build features they want (within limits) without using more of your own developers’ time, and you can still keep control over how data can be accessed or changed. That small addition is called an application programming interface, or API. APIs are used across the web, but if you aren’t a developer, you may not have heard of them. They can be easily built on top of Django projects, though, and can provide great value to your own developers as well as to your users.

What Is An API?

At its core, an API is an interface that allows two pieces of software to talk to each other. Most often this means a request that reaches across the web to a third-party service, although an API can also allow two of your own apps to talk to each other.

Why Would I Want One?

As a user, there are many reasons you might want access to an app’s data. How often do you think “this would be great if they added just one other feature!”

We’d all like to think our apps address all our users’ needs, but there will always be a subset who have a corner-case use that they’d like to implement. If only a few dozen people would use that feature, but you have a lengthy backlog of other features that a more significant number of users would use, then you’re likely to prioritize the features that will help the most people.

With an API, that small subset can write (or hire someone to write) an add-on which gives them their niche feature. Multiply that by the dozens of small niche subsets of users who have different wishlists and you might have a bunch of users who would benefit from just one new feature: an API.

Is It Worth The Cost?

As with many software products, the value proposition depends on the amount of time that will be invested in building the feature, but an API doesn’t have to take much investment! As previously mentioned, an API can be easily layered on top of an existing Django project, so if you have Django apps, you may be closer than you think.

One of the greatest values an API can provide is that users may attach themselves to your product, making it an integral part of their operations. If they only use the features that are laid out on your website, then another company can come along and build a competing service that handles all of those functions and more, or offers them at a lower cost. On the other hand, if they use just 70% of the features you advertise but have integrated your service into their operations by using your API, then they would have to rewrite those integrations to move to another service. Suddenly, that API is a really strong reason to stick with your service rather than hop to the newest player in the field.

Getting started

If you don't have an in-house development team to help with an API, the work can be contracted out to a web development company like Caktus. Contact us to start developing an API for your Django project.

Philip SemanchukA Python 2 to 3 Migration Guide

July 2018 update: I’ll be giving a talk based on this guide at PyOhio next week. If you’re there, please come say hello!

It’s not always obvious, but migrating from Python 2 to 3 doesn’t have to be an overwhelming effort spike. I’ve done Python 2-to-3 migration assessments with several organizations, and in each case we were able to turn the unknowns into a set of straightforward to-do lists.

I’ve written a Python 2-to-3 migration guide [PDF] to help others who want to make the leap but aren’t sure where to start, or have maybe already begun but would like another perspective. It outlines some high level steps for the migration and also contains some nitty-gritty technical details, so it’s useful for both those who will plan the migration and the technical staff that will actually perform it.

The (very brief) summary is that most of the work can be done in advance without sacrificing Python 2 compatibility. What’s more, you can divide the work into manageable chunks that you can tick off one by one as you have time to work on them. Last but not least, many of the changes are routine and mechanical (for example, changing the print statement to a function), and there are tools that do a lot of the work for you.
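One of those routine, mechanical changes can be made without sacrificing Python 2 compatibility at all. This sketch shows the print change in its version-agnostic form (the variable names are illustrative):

```python
# The __future__ import is a no-op on Python 3; on Python 2.7 it
# disables the print statement and enables the function form instead.
from __future__ import print_function

items = ["spam", "eggs"]
message = "found {0} items".format(len(items))

# Python 2 statement form would have been:  print message
print(message)  # → found 2 items
```

Once every print in the codebase uses the function form, that whole class of migration work is already done before the switch to Python 3.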

You can download the migration guide here [PDF]. Please feel free to share; it’s licensed under a Creative Commons Attribution-ShareAlike license.

Feedback is welcome, either via email or in the comments below.


Caktus GroupUX Research Methods 3: Evaluating What Is

In previous blog posts on UX research methods, I discussed techniques we use to understand how users think and feel, what they need and want, and why; and those we use to analyze and understand user behavior.

Another group of techniques frequently included in UX research methods does not involve a direct study of users, but rather an evaluation of the landscape and specific instances of existing user experience.

Competitive Landscape Review

Competitive landscape review is typically done as a qualitative (generative or evaluative) study of a small sample of direct and indirect competitors. Direct competitors are companies that offer the same, or a very similar, value proposition to the same customer segment that our client serves. Indirect competitors are companies that offer a similar value proposition to a different customer segment from that served by our client.

During a competitive landscape review, we look at three to five direct competitors and no more than three indirect competitors. For each competitor, we analyze:

  • Market positioning
  • How long they’ve been on the market
  • What the delivery method of their software is
  • Who their primary user segments are

We also look for reviews of the competitors’ products to better understand what their users like and what they don’t like. Finally, we create a feature matrix to compare key features across competitors’ products, and identify windows of opportunity for our client.

Content Audit

A content audit is performed as a qualitative, evaluative research method that can be employed to better understand the current state of an existing application or website. It is most relevant in the case of content-heavy, marketing websites that need redesign.

Content auditing is a process of creating and evaluating an inventory of all content and assets on a website, including recording content structure and relationships between content blocks. It may also include an analysis of the vocabulary used as part of the user interface in order to assess its quality and consistency. It is a great tool to employ ahead of a content modeling discovery workshop.

UX Review

A review of user experience of an existing website or application is a qualitative, evaluative method that allows the reviewer(s) to analyze the current state through one of the following approaches.

Heuristic Evaluation

This type of review compares the current state of a website or an application to an established set of usability heuristics (best practices or rules-of-thumb), and identifies where the current state falls short in terms of its adherence to those heuristics.

The best known and most widely-used set of heuristics was developed by Rolf Molich and Jakob Nielsen and has become an industry standard. Because this type of review relies on an established standard, it can be performed by anyone who has access to that standard. It is also recommended that a heuristic evaluation is done by more than one reviewer.

Expert Review

An expert review does not have to rely strictly on a prescribed set of heuristics. The set of best practices an expert review references may be broader or narrower. Some websites or applications may require an approach that does not adhere to all standard heuristics.

For example, Nielsen’s heuristics stipulate “aesthetic and minimalist” design. While that guideline is considered best practice for many types of applications, it may not apply to some. In games, the experience relies heavily on a very rich (and certainly not minimalist) aesthetic. Because an expert review affords more flexibility than a heuristic evaluation, it should be performed by someone with expertise in UX best practices. It can be done by one expert.

Selecting UX Research Methods for a Project

Evaluating the current user experience is done at the onset of a project. For a new project, or a redesign of an existing project, conducting a competitive landscape review can provide insights into user experience solutions already on the market and opportunities for innovation. A redesign project will benefit from taking stock of what is:

  • What are the structure and components of the existing content?
  • Does the design and experience of the current website adhere to established heuristics (rules-of-thumb)?
  • What best practices are not currently implemented, but should be in the redesigned application?

As is the case with other UX research methods, not all techniques listed in this group have to be employed on a single project.

If you’re not sure what UX research will benefit your project most, get in touch. We can help.

Caktus GroupUX Research Methods 2: Analyzing Behavior

Previously, I explained interviews, surveys, and card sorting as techniques that help UX researchers understand how users think and feel, what they need and want, and why. In this post, I will review UX research methods best suited to understand user behavior and its causes.

As mentioned before, there exist many UX research methods, but not all of them have to be employed on any given project. The exact selection of techniques depends on the specific needs of a project, its budget, and timeline.

Usability Testing

By usability testing, we specifically mean an evaluative, behavioral research method that consists of observing users (directly or indirectly) while they complete specific tasks on a website or within an application. At Caktus, we conduct qualitative usability testing during which we observe the user’s interactions with a website or an application.

It’s worth noting that usability testing can be undertaken with different goals in mind:

  • As a formative study to evaluate the current state of usability of a website, ahead of a redesign.
  • As a summative study to evaluate the final state of a feature or a website at the end of a project (or a development cycle).
  • As a formative assessment of a competitor's website or application to understand what usability problems exist and should be avoided.

Moderated usability testing

Moderated usability testing is a study moderated by a Caktus UX designer. It can be done in person on-site, or remotely by leveraging a third-party platform that allows us to connect with the user over the internet, have them share their screen, observe as they complete the tasks they’re presented with, and record the entire session. The platform also allows other observers to join in remotely, a great way for the client stakeholders to gain a direct insight about their product.

Unmoderated usability testing

Unmoderated usability testing is conducted with the help of a third-party platform that allows us to create tasks, deliver them to the user along with a link to the website or application under evaluation, and record the session during which the user is completing the tasks. We can then evaluate the recording and analyze the findings in order to issue recommendations.

On-site observation

On-site observation is a qualitative study that can result in behavioral or attitudinal insights. When done as generative research, it consists of observing users during their daily work routines in order to better understand how they work, what their needs and pain points are, etc. When conducted as evaluative research, it means observing users completing tasks within an application in order to identify usability problems. The latter may seem similar to usability testing. There is, however, an important difference between the two approaches.

In usability testing, the participants are novice application users (users who have not used the application before) and the researcher provides them with tasks that imitate real-world scenarios. In an on-site observation, the researcher observes people who use the application in their work. Users walk the researcher through their workflows in the application, pointing out what’s working and what’s not working. The researcher gains insights that are not only behavioral (representing what users do while interacting with the application), but also attitudinal (representing what people think and say, what their opinions are).

Treejack testing

Treejack testing is a qualitative or quantitative (depending on the participant sample size), evaluative method that allows us to assess how well information architecture and/or a navigation design pattern aligns with the users’ mental model. It consists of asking users to find labels representing content items within a tree-like model of information architecture or the navigation. At Caktus, we conduct treejack testing with the help of a third-party service. It allows us to measure not only the success and failure rates, but also to see the path a user takes to locate each content item.

First-click testing

First-click testing is typically a quantitative, evaluative, behavioral method, in which users are presented with static images of an interface (either screenshots or high fidelity mockups) and asked to complete tasks by clicking on what they interpret as interactive elements of the interface, e.g., links or buttons. The premise of this approach is founded in a 2009 study (3), which showed that the user’s first click is a good indicator of a successful completion of a task. In other words, if the user’s first click is correct, they’re more likely to find what they’re looking for than if their first click is incorrect. When done with a large sample of participants, results of first-click testing are a good predictor of usability of the UI elements being tested.

At Caktus, we have used first-click testing as a qualitative method in an iterative series of tests that include card sorting, treejack testing, and first-click testing. In this approach we employ first-click testing in a way similar to treejack testing, as a method to assess the efficacy of a design that resulted from card-sorting. We leverage a third-party platform to perform first-click testing.

Analytics Review

Analytics review is a quantitative, behavioral, evaluative research method. We use it to supplement the qualitative research we do. While a source of valuable data, analytics on its own does not necessarily deliver answers to questions about the quality of user experience or about usability. In combination with qualitative methods, however, it can enhance the process of diagnosing existing problems and improving user experience.

Analytics review consists of reviewing a set of metrics that an application’s or website’s analytics tool captures, e.g.

  • paths users take to reach certain content, sources of incoming traffic;
  • keywords used to find the content of interest;
  • events (or user interactions) on a page e.g., clicks, downloads, etc.;
  • conversion rates;
  • time spent on a page;

and more. In addition, reviewing a website’s search logs can be an insightful source of information about content users frequently look for or are not finding by means of the website’s main navigation.

Selecting UX Research Methods for a Project

The research methods we employ to analyze and understand user behavior can be helpful at any stage of a project.

We may begin a redesign project with:

  • Analytics review to gain insights about user behaviors on the current website or in an application
  • Usability testing of the current website to uncover existing usability problems
  • Competitive usability testing to reveal which digital experiences work well and which do not
  • On-site observations of users with or without the technology the project is concerned with

We may test initial designs for the project by conducting:

  • Treejack testing
  • First-click testing
  • Usability testing

And we monitor the usability of the implementation by conducting moderated or unmoderated usability testing.


For further reading, I suggest the following:

  1. UX Research Cheat Sheet, Susan Farrel, Nielsen Norman Group
  2. When to Use Which User-Experience Research Methods, Christian Rohrer, Nielsen Norman Group
  3. Bailey R.W., Wolfson C.A., Nall J., Koyani S. (2009) Performance-Based Usability Testing: Metrics That Have the Greatest Impact for Improving a System’s Usability. In: Kurosu M. (eds) Human Centered Design. HCD 2009. Lecture Notes in Computer Science, vol 5619. Springer, Berlin, Heidelberg

Caktus GroupUX Research Methods 1: Understanding Thought Processes, Motivations, and Needs

In a previous blog post, Types of UX Research, I discussed how UX research can be classified. I explained qualitative and quantitative, generative and evaluative, formative and summative, and attitudinal and behavioral types of research. Within each of these categories of research, there are several methods that can be used to reach specific project objectives.

It is good to have a range of research methods at one’s disposal, but it’s not necessary to use them all. Particular project needs, the project budget, and the project timeline are all factors that must be taken into account when deciding on which methods to use. Below I discuss specific techniques we use at Caktus to understand users’ thought processes, motivations, and needs.


Interviews

Interviews are a qualitative, attitudinal, generative research method typically used at the onset of a project. They are a great way to gather information ahead of a discovery workshop. They can also be conducted after a discovery workshop to help fill in knowledge gaps discovered during the workshop.

User Interviews

We talk to users to gain insights about who they are; what needs, wants, and pain points they have; in what contexts they operate; what their mental models are, etc. User interviews help us understand the user goals and outcomes that the application we are building must support, and are a basis for developing personas that guide the design and development process. Recruitment of participants for user interviews is done with the help of the client or through a third-party recruiting service that allows us to screen potential participants and select a well-matched target group.

Stakeholder Interviews

While understanding user needs and goals is paramount to requirements gathering, understanding business goals is equally important. Business goals should encompass user goals, but they are voiced from the perspective of the business. We learn about business goals, as well as the client’s perspective on user needs and pain points, by talking to client stakeholders.


Surveys

Surveys are primarily used as a quantitative research method for generative or evaluative purposes. They allow us to collect information from larger groups of respondents and generally result in numeric data. They can also be administered to collect qualitative data through open-ended questions. When used as a generative tool, a survey can inform a discovery workshop or be used to fill in knowledge gaps after the workshop. When used as an evaluative tool, a survey can be administered as formative research to evaluate an initial state of an application, or as summative research to assess the final or near-final state of an application.

Card sorting

Card sorting is a qualitative or quantitative (depending on the participant sample size), generative method often used to refine the information architecture of an application or website and to gather insights on which to base navigation design. In this type of study, participants are asked to group items (cards) representing the website’s content into categories that make sense to them. If names of the categories are provided by the researcher, the approach is called closed card sorting. If users are asked not only to categorize items, but also to create and name their own categories, the approach is called open card sorting. A mixed approach (with some categories pre-determined by the researcher, and some left to the participants to create) is called hybrid card sorting. At Caktus, we conduct remote card sorting studies via a third-party platform.

Selecting UX Research Methods for a Project

Interviews, surveys, and card sorting are all methods particularly useful at the onset of a project, although they could also be employed at later stages if clarification of requirements is needed. They help us understand how users think and feel, what they need and want, and why. Based on that understanding, we are better prepared to design a solution that delivers value for the target user segment.

At Caktus, we tailor the selection of research methods to a project’s objectives. If understanding users’ needs in quantitative terms is necessary, for example if it is paramount to have confidence that a majority of users display a particular preference or need, a survey is a great tool. If we want to understand why users display a particular preference or need, or how they think about their day-to-day tasks, interviews are the technique of choice. And to understand how users categorize content that they seek or interact with, we conduct card sorting. On any project, best results are obtained with a combination of UX research methods.

Have a project in mind? We can help you decide where to start and what UX research methods to leverage to give your project the best possible starting point.

More Resources

  1. “UX Research Cheat Sheet”, Susan Farrel, Nielsen Norman Group
  2. “When to Use Which User-Experience Research Methods,” Christian Rohrer, Nielsen Norman Group
  3. “Complete Beginner’s Guide to UX Research”, UX Booth
  4. “7 Great, Tried and Tested UX Research Techniques”, Interaction Design Foundation

Caktus GroupTypes of UX Research

Requirements gathering (or product discovery) is a part of every development project. We must know what to build before we build it, and we must refine our understanding of what we are building as we move along. Discovery workshops are a format well-suited for certain types of projects before development begins, although requirements gathering continues throughout a development project.

Whether conducted at the onset of a project or throughout the development effort, product discovery must be informed by insights and data.

This is the first of four blog posts devoted to conducting research in the context of user-centered design and development. In this post, I will look at the reasons for doing research and the types of research at our disposal. In the next blog posts, I will present and explain the specific user experience (UX) research methods we favor at Caktus.

Reasons for Doing Research

In user-centered application design and development, research is done in order to:

  • Learn who the users are, what they do, how they work, how they feel and think.
  • Describe context(s) in which users operate with and without the technology we’re building.
  • Understand user goals, needs, wants, and pain points.
  • Understand user mental models.
  • Learn how users accomplish tasks in the context of an application as well as independently of any technology.
  • Find out what experiences competitors are building and how those experiences work for users.
  • Gather information necessary to define information architecture and content structure.
  • Test assumptions made about the users, their contexts, and their interactions with the application we’re building.
  • Identify where the application fails to support user outcomes or what needs to be done to support them.
  • Analyze usage patterns of an existing application.
  • Analyze users’ behavioral patterns with regard to the technology under consideration.

Because of its emphasis on users, we call this type of research UX research.

Types of UX Research

UX research can be classified in a variety of ways. It’s helpful to be familiar with these classifications in order to understand what type of research can be applied when and for what purpose.

Quantitative vs. Qualitative Research

The classification of research into quantitative and qualitative is based on the type of methodology involved.

Quantitative research is used to measure user behavior and helps answer the what, how much, and how many types of questions:

  • How many pages does a user navigate to during a visit?
  • With what frequency are users accessing the application on certain devices?
  • How many new and how many returning visitors does the application have in a given time period?
  • How much time do users spend on a given page?
  • What is the distribution of keywords that users search for?
  • How many searches for a given keyword have been run in a period of time?
  • How many conversions occur on version A of the page, and how many on version B?

When done with a large enough sample of participants, quantitative research can deliver statistically significant results.

Qualitative research is done to describe user behavior and can be conducted with smaller samples of users. It results in descriptive outcomes that help understand the nuances of user contexts, behaviors, and interactions with technology. It seeks to understand the why of users’ actions:

  • Why are users spending more time on this page than on the other page?
  • Why are users converting better on version B of the page?
  • Why do people fail to complete a task?
  • Why are users frustrated by this feature?
  • Why do people need that feature?
  • Why do users have trouble understanding how to use the application?

While many people favor quantitative research, it is worth noting that some insights can only be found through qualitative research.

Quantitative and qualitative research work best when done in tandem. Both types of research can be employed at the onset of and throughout a project.

Generative vs. Evaluative Research

The classification of research into generative and evaluative is based on the intention with which research is conducted.

Generative research is done to generate information about the users and ways in which they operate. It involves learning about who the users are, what they do, how they do it, why they do what they do in a particular way, what frustrates them, what makes them happy, in what contexts they take an action, etc.

Generative research helps define the problem under consideration. The bulk of generative research is done at the beginning of a project, but it can continue at a smaller scale throughout the project if the problem requires further clarification.

Evaluative research is done to assess something that exists, e.g., a design or an application. The types of questions that evaluative research can help answer include:

  • Is the design solving the problem for users?
  • How is the application performing?
  • Can users complete tasks easily?
  • Which features are a source of frustration?
  • Where and when are users unable to complete tasks correctly, and why?
  • What works great, what does not, and why?

Evaluative research can be conducted at any time throughout the project as long as there is something to evaluate. Early sketches, paper or digital prototypes, and implemented interfaces can all be subject to evaluative research.

Quantitative, qualitative, or a combination of these methods can be used in either generative or evaluative research.

Formative vs. Summative Research

Formative and summative research are types of evaluative research. The difference between them lies in when in a project they are conducted and for what purpose.

Formative research is typically done at the onset of a project or development cycle to assess the current state of a feature, a website, or an application. It helps identify problems to be solved (for example, pain points the users experience when interacting with an application).

Summative research is a process of evaluating the final or near-final state of a feature, a website, or an application at the end of a project or development cycle. It helps evaluate whether a design, feature, or application/website meets the user goals. If a project or development cycle started with formative research, the results of summative research can be compared to those of the formative research in order to measure success or progress.

Quantitative, qualitative, or a combination of methods can be used in either formative or summative research.

Attitudinal vs. Behavioral Research

Attitudinal and behavioral research derive their classification from the nature of the information obtained.

Attitudinal research is about what people say. By learning what people say, we gain insight into what they think, feel, and want.

On the other hand, in behavioral research we watch what people do. By watching user actions, we can determine what they need to reach the desired outcomes, catch a glimpse of the mental models they bring into their interactions with technology, and understand what needs to be done to align the technology with users’ mental models.

Users are people, and people are not fully self-aware. Unconscious mental processes occur faster than conscious ones and as a result, people may make decisions and choices without fully knowing why. For that reason, simply listening to what people say (as we do in attitudinal research) may not be sufficient to understand requirements thoroughly. Watching users complete tasks is often necessary to understand what they need and expect from technology we’re building.

Coming Up Next: UX Research Methods

It is helpful to understand the various types of UX research available to us to fully appreciate the value of research in user-centered application design and development. In the next blog post, I will discuss the specific UX research methods we use at Caktus to inform requirements gathering for the projects we build.

Caktus GroupQuick Tips: How to Find Your Project ID in JIRA Cloud

Have you ever created a filter in JIRA full of project names and returned to edit it, only to find all the project names replaced by five-digit numbers with no context? The trial and error approach (deleting and restoring numbers one by one until the project you wanted to remove no longer appears in the filter results) is painful. So, how do you find the ID for a project?

Previous version of JIRA

Step 1. As an admin user, select the gear to open the admin dropdown and select Projects under JIRA Administration. (Screenshot: the admin dropdown in the previous version of JIRA.)

Step 2. Select your project from the list.

Step 3. Once on the project summary page, select Details on the left.

Step 4. The project ID appears at the end of the URL.

New JIRA experience

Step 1. As an admin user, select Projects from the left nav. (Screenshot: the left navigation menu in the new JIRA experience.)

Step 2. Select your project from the list.

Step 3. Once on the project page, select Settings at the bottom of the project nav. (Screenshot: the project page nav in the new JIRA.)

Step 4. The project ID appears at the end of the URL.

Happy filtering! For more JIRA tips check out our previous post on how to change your name in JIRA.

Philip SemanchukSetuptools Surprise


I recently tripped over my reliance on a simple (and probably obscure) feature in Python’s distutils that setuptools doesn’t support. The result was that I created a tarball for my posix_ipc module that lacked critical files. By chance, I noticed when uploading the new tarball that it was about 75% smaller than the previous version. That’s a red flag!

Fortunately, the bad tarball was only on PyPI for about 3 minutes before I noticed the problem and removed the release.

I made debugging harder on myself by stepping away from the project for a long time and forgetting what changes I’d made since the previous release.


In February 2014, I (finally) made my distribution PyPI–friendly. Prior to that I’d built my distribution tarballs with a custom script that explicitly listed each file to be included in the tarball. The typical, modern, and PyPI–friendly way to build tarballs is by writing a MANIFEST.in file that a distribution tool (like Python’s distutils) interprets into a MANIFEST file. A command like `python setup.py sdist` reads the manifest and builds the tarball.

That’s the method to which I switched in February 2014, with one exception—since my custom script already contained an explicit list of files, it was easier to write a MANIFEST file directly and skip the intermediate MANIFEST.in. That works fine with distutils.

I released version 1.0.0 of posix_ipc in March of 2015, and haven’t needed to make any changes to the code until just now (the beginning of 2018). However, in February 2016, I made a small change to setup.py that I thought was harmless. (Ha!)

I added a conditional import of setuptools so that I could build wheels. (Side note: I really like wheels!) The change allows me to build posix_ipc wheels on my laptop where I can ensure setuptools is available, but otherwise falls back on Python’s distutils which works just fine for everything else I need setup.py to do, including installing from a tarball. The code looks like this —

try:
    import setuptools as distutools
except ImportError:
    import distutils.core as distutools

The Problem

Just a few days ago, I published a maintenance release of posix_ipc, and it was then I noticed that the tarballs I built with my usual python setup.py sdist command were 75% smaller and missing several critical files. Because it had been 23 months since I made my “harmless” change to setup.py, the switch from using distutils to setuptools wasn’t exactly fresh in my mind.

However, some examination of my commit log and a realization that this was the first release I’d made after making that change gave me a suspicion, and grepping through setuptools’ code revealed no references to MANIFEST, only MANIFEST.in.

There’s also this in the setuptools documentation, if I’d bothered to read it—

[B]e sure to ignore any part of the distutils documentation that deals with MANIFEST or how it’s generated from MANIFEST.in; setuptools shields you from these issues and doesn’t work the same way in any case. Unlike the distutils, setuptools regenerates the source distribution manifest file every time you build a source distribution, and it builds it inside the project’s .egg-info directory, out of the way of your main project directory.

So that was the problem—setuptools doesn’t look for a MANIFEST file, only MANIFEST.in. Since I had the former but not the latter, setuptools used its defaults instead of my list of files in MANIFEST.

The Solution

This part was easy. I converted my MANIFEST file to a MANIFEST.in which works with both setuptools and distutils. That’s probably a more robust solution than the hardcoded list in MANIFEST anyway.
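For anyone unfamiliar with the format, a MANIFEST.in replaces an explicit file list with template commands that both distutils and setuptools understand. As a sketch (the file names are illustrative, not the actual posix_ipc manifest):

```
include README LICENSE VERSION
include *.c *.h
recursive-include demo *.py
graft html
```

The `include`, `recursive-include`, and `graft` commands are standard manifest template directives; the tool expands them into the concrete file list at sdist-build time, which is why the list stays correct as files are added.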

I’m pleased that posix_ipc has been stable and well-behaved for such a long time, but these long breaks between releases mean a certain amount of mental rust has always accumulated when it’s time for the next one.

By the way, the source for posix_ipc is now hosted on GitHub: https://github.com/osvenskan/posix_ipc

Caktus GroupCulture of Unit Testing

Unit testing is something that deeply divides programmer communities. Nearly everyone agrees that it’s good to have unit tests in place, but some developers question whether the time invested in writing unit tests would be better spent writing “real” code, doing manual QA, or debugging.

In practice, it's a good use of time and should be standard in any company that takes pride in its end product.

Real-world examples

On one project, we self-enforced a requirement that at least 90% of our code be covered by unit tests at any given time. We automated this so that, if coverage drops below that level, the code won’t be merged into the main codebase until enough tests have been written to bring it back up. This ensures that tests are written as code is written, avoiding the monstrous task of writing tests for an already-massive codebase that has no tests yet.

There have been times when we have been on the verge of not finishing a task within the time we had planned and tests haven’t been written for that code yet. It’s extremely tempting in that situation to skip test-writing. In a company that values deadlines over quality, such tests would likely be skipped, but we’ve made a different choice at Caktus. I think it’s the right one.

At least a couple times a month I find myself writing tests for the code I’ve just written and realizing that I had omitted a check for an edge case. These usually take little time to fix. Writing tests can also help me think about how the code should be structured, particularly encouraging me to make it more modular. Not only does that increase readability, but it also can make it easier to update later as requirements change.

When I think about tests, I automatically go straight to the edge cases. A manual QA process may or may not catch problems with rare or unusual inputs and it can take a lot of time to manually test numerous edge cases. But having written an automated test, I ensure that the edge case continues to be handled according to the client’s specifications.
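To make that concrete, here is a small, made-up example (not from a Caktus project): a helper function plus the kind of edge-case tests that a manual QA pass could easily skip.

```python
import unittest


def chunk(items, size):
    """Split a list into sublists of at most `size` elements."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]


class ChunkTests(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(chunk([1, 2, 3, 4], 2), [[1, 2], [3, 4]])

    def test_edge_cases(self):
        # An empty list and a non-positive size are exactly the kinds
        # of inputs that rarely come up in manual testing.
        self.assertEqual(chunk([], 3), [])
        with self.assertRaises(ValueError):
            chunk([1, 2], 0)
```

Run with `python -m unittest`; once written, the edge cases are re-checked on every run at essentially zero cost.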

These same unit tests made a large refactoring process much easier. Going into the process, I knew that the change I was making would require compensatory changes in dozens of other places in the code, and I’m sure I would have eventually located all of the places it needed to change anyway. But, since we already had thorough test coverage, I was able to make the initial change, run the test suite, and use the test failures to know where I needed to make changes in the existing code. I also knew when I was done because all the tests were passing again. One final scan through the code confirmed that I hadn’t missed anything, and subsequent real-world tests have confirmed that everything seems to be working fine. Because of the attention to tests throughout the process, the client could be assured of a consistently high-quality product with very few bugs in less time than it would take without the tests.

Establishing a culture of testing

The first step in establishing testing as a standard part of the coding process is simply to measure it. Plenty of tools are available to measure your testing and get reports on what’s being covered and what isn’t. The best starting point is to use coverage, which will tell you how much of your code is being executed by your existing tests. As Caktus chose to do in the above example, a minimum coverage level can be set which must always be maintained, which works great when implemented at the start of a project and adhered to consistently.
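With coverage.py, for example, that minimum can be enforced through the `fail_under` setting in a `.coveragerc` file, which makes `coverage report` exit with a failure status whenever coverage slips below the threshold (handy for failing a CI build):

```
[report]
fail_under = 90
```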

If trying to add testing to existing code, the same principle can be applied with some minor tweaks. Unless you have the luxury of putting a hold on new code while tests are written (unlikely!), you will probably need to gradually add tests. The most reasonable way to do this is to either set goals for coverage or to impose a requirement that the coverage must always go up (until it reaches a reasonably high level).

Regardless of the application, if unit tests are consistently expected, the team will get faster and better at implementing them.

Code maintenance

It’s often asserted that a test suite is simply more code to maintain. While technically true, tests, once written, should only need to change if the requirements also change. This means that the tests should not need to be tweaked constantly. When they do need to be tweaked, that also helps streamline the process of finding the code that needs to change. Most of the time, the tests will sit untouched and do their job, asserting that all of the code is working as expected, with no maintenance required. When a test needs to be changed, it is again doing its job, pointing to code that is involved in changing requirements. No test should be changed just because it fails. A failing test tells you that either the requirements (and therefore code) changed, or that the test was not written correctly in the first place.

False sense of security?

One drawback of unit tests is that they can make you feel like everything is working great, and reduce motivation to do real-world testing. While unit tests make a great first pass over the code, there is no substitute for genuine QA. The tests should make the QA process go faster, as some of the more obvious bugs will be found before any manual testing happens, but QA will always still be needed. Even if a codebase has 100% coverage, there’s no guarantee that something hasn’t been missed. A bug in a test can easily disguise a bug in the code.

Reflections on testing

It took a not-insignificant amount of time for me to get the hang of writing unit tests when I was new to the concept, but my learning time has been more than made up for by the time those same tests have saved me. Testing is now second-nature to me, and I can write unit tests in no time when I am testing code I’ve just written. It only takes a few extra minutes and it so often catches errors or assists in later coding that I can’t imagine not taking the time to write tests from the beginning.

Certainly, tests need to be fairly comprehensive in order to gain all these benefits, but even a small test suite can be helpful and test coverage can be increased bit by bit if tests are written with every new pull request. We have made concerted efforts to establish test coverage on existing, untested code before, and that’s great if you have the time. If not, though, just remember that some is better than none, and increasing is better than stagnating.

Next steps

If you want to work on increasing emphasis on tests in your own projects, here are some strategies to think about:

  • Practice writing tests for every bug fix or new feature (better yet, before starting on them!)
  • Get in the habit of running test suites frequently
  • Implement a policy that every pull request should include a test for the feature or bug being worked on
  • Implement a policy that code coverage should not go down on any pull request
  • Run mutation testing to find places where coverage is fine, but results of the executed code are not actually being tested
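On that last point, an illustrative example (hypothetical code, not tied to any particular mutation-testing tool): a test can execute every line yet assert nothing, which is exactly the weakness mutation testing exposes.

```python
def absolute(x):
    """Return the absolute value of x."""
    return x if x >= 0 else -x


def test_absolute_weak():
    # Executes every line (100% coverage), but asserts nothing:
    # a mutant that changes `-x` to `x` would still pass.
    absolute(-5)


def test_absolute_strong():
    # This version actually checks the result, so it kills that mutant.
    assert absolute(-5) == 5
```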

Ready to get started? Read more about testing and code quality.

Caktus GroupCaktus Blog Best of 2017

With 2017 now over, we highlight the top 17 posts published or updated on the Caktus blog this year. Have you read them all?

  1. Using Amazon S3 to Store your Django Site’s Static and Media Files: Our most popular blog post was updated in September 2017 with new information. Learn how to use Amazon S3 to serve static and media files and improve site performance.
  2. A Production-ready Dockerfile for Your Python/Django App: Docker provides a solid way to containerize an app. This blog post includes a Dockerfile ready for your use plus instructions on how to use it in a project.
  3. Python Type Annotations: Type annotation support in Python helps developers avoid errors. Read this post for a quick overview on how to use them.
  4. Digging Into Django QuerySets: Learn how to use the Django shell with an example app to perform queries.
  5. Hosting Django Sites on Amazon Elastic Beanstalk: We use AWS Elastic Beanstalk for deploys and autoscaling. This post introduces the basics and how to use it with Python.
  6. SubTests are the Best: Good tests are important to good code, but what makes a good test? Three factors are detailed in this post, which was also presented as a talk at PyOhio 2017 and can be watched on YouTube.
  7. Writing Unit Tests for Django Migrations: Another all-time top blog post which received an update this year, with a walkthrough demonstrating how to write thorough tests for multiple versions of Django.
  8. Managing Your AWS Container Infrastructure with Python: Introducing CloudFormation and Troposphere as tools to host and manage Python apps on AWS.
  9. New Year, New Python: Python 3.6: Highlights from the Python 3.6 release, including secrets, new string interpolation methods, variable type annotations, and more.
  10. Advanced Django File Handling: Customize Django’s file handlers for more flexibility. This post shows you how.
  11. 5 Ways to Deploy Your Python Web App in 2017: Part of our PyCon 2017 Must See Series, this summary also includes the video of the talk at PyCon. Take a look at a live app deployment with ngrok, Heroku, AWS Lambda, Google Cloud Platform, and Docker.
  12. Python Tool Review: Using PyCharm for Python Development - and More: One of our developers reviews the PyCharm IDE for Python. Learn more about how it’s used at Caktus in this interview with our developers (from JetBrains).
  13. Opening External Links: Same Tab or New?: An exploration of the debate around how external links should open, with perspectives from marketing, UX, web development, and users.
  14. Building a Custom Block Template Tag: A walkthrough of how to build a block tag, with references to relevant Django documentation.
  15. 3 Reasons to Upgrade to the Latest Version of Django: For business stakeholders new to website development, we offer three reasons why upgrading the technology behind the site should be considered a necessity.
  16. From User Story Mapping to High-Level Release Plan: The user story map created as part of a discovery workshop is an excellent tool to use in writing the first release plan for a development project. Find out why in this post.
  17. How to Make a jQuery: Recreate the most helpful parts of jQuery to learn how to develop without it.

Going into 2018

What were your favorite posts? What topics did you find most interesting or helpful? What are you hoping to learn about in 2018? Let us know in the comments or on Twitter what you’d like to see more of in the coming year.

Caktus GroupSouthern Fried Agile 2017 Recap

I attended the Southern Fried Agile conference in November 2017, where I heard some excellent talks and connected with local Agilists in Charlotte, NC. Southern Fried Agile is the sister conference of TriAgile, which I also attended this year.

The keynote address by Rich Sheridan, CEO of Menlo Innovations and author of Joy, Inc., set the tone for the day. He inspired the audience by describing the Agile culture and mindset of his company. I took away some innovative ideas from this talk, including:

  • Rigorous pair programming that rotates partners every week
  • Demos where the customer uses the software that was built while the team observes and gathers feedback
  • A culture of minimal meetings, supported by an open workspace that enables constant communication
  • Stakeholder prioritization techniques that use the physical size of pieces of paper to represent level of effort

The picture he painted of the company culture was both memorable and aspirational, and I hope to see more of these examples in the future of Agile.

The most interesting talk I heard was by Sally Elatta, president of Agile Transformation Inc., on "Scaling Agile Metrics and Measuring What Matters." Her presentation emphasized that agility starts at the top of an organization. An Agile transformation that is dictated rather than demonstrated will suffocate teams. A healthier culture is produced when company leadership sets the example and participates in agility. This resonated with me and helped me understand how Agile concepts and techniques can be applied outside of development teams. The talk focused on a system of metrics for Agile measurement at the team, program, and business levels, which I look forward to trying!

Another enlightening talk was "Overcoming Resistance - How to Engage Developers in Agile Adoption" by David Frink from Ipreo. He outlined reasons that developers may not feel engaged with Agile, as well as signs of non-engagement. Using the elephant and rider metaphor (where the elephant represents a person’s emotions, passion, fear and the rider represents logic, analysis, planning), the talk provided ways to motivate both the elephant and the rider. He also explained why it's essential to address the two together. Some methods are:

  • Putting the developers in touch with their users with tools like usability studies, to build a sense of empathy
  • Giving them goals and challenges instead of predetermined solutions, so they can use their creativity to produce the best solutions
  • Protecting their focused time to let them maximize flow (time “in the zone”)
  • Uncovering resistance with techniques like Fist of Five
  • Giving positive feedback to reinforce and build upon Agile behaviors

I also heard Rob English from CapitalOne talk about "Leading a Scrum Master Evolution," making a strong case for Scrum Masters to move in a more technical direction and build more domain knowledge; "Gain Organizational Efficiencies with Kanban" by Yvonne Kish, outlining the benefits of Kanban throughout multiple areas of an organization (delivery, portfolio, and business levels); "Minimum Viable Process" by Nick Smith from Fidelity, describing his team's Scrum culture; and finally "Motley Crews: Lives & Deaths of Cinematic Teams" by James Collins from Wells Fargo, featuring movie clips about teams and their evolution.

The larger themes from this year’s conference were a renewed emphasis on building and supporting autonomous teams, minimizing process to be as lightweight as possible, and a focus on using empirical data to inspect and adapt at multiple levels. Events like this help bring me back to the spirit of Agile when I get too bogged down in the day-to-day. They are also an excellent way to network and hear new ideas! The conference delivered high value for an affordable registration fee and I would recommend it to anyone working in development in or around North Carolina.

Caktus GroupYear-End Charitable Giving 2017

Twice a year we solicit proposals from the team for contributions to non-profit organizations in which individual Cakti are involved or that have impacted their lives. Our charitable giving program is a chance to support not only our own employees but the wider community. This quarter we are pleased to donate to the following organizations.

St. John Rescue and Unidos Por Puerto Rico

Hurricane relief was at the forefront of our employees’ minds this season. Though Hurricanes Maria and Irma hit several months ago, inhabitants of these U.S. territories are still struggling to recover from the devastating effects.

St. John Rescue provides emergency rescue and medical support along with equipment and supplies. They formed in 1995 with the goal of providing improved response services on the island and have been crucial in providing storm relief and emergency assistance.

Unidos Por Puerto Rico is a new initiative formed, organized, and administered by and for Puerto Ricans to provide direct aid in the wake of the year’s storms. One hundred percent of the organization’s proceeds go to helping victims affected by these natural disasters.

Triangle, NC Organizations

Note in the Pocket provides clothing to children identified by various schools and social service agencies as impoverished or homeless and in need of clothing to wear to school.

InterAct works to end domestic and sexual violence in Wake County. They provide a 24-hour crisis line, community outreach programs, court advocacy, an emergency shelter, individual and group counseling, sexual assault services, and youth education and prevention services.

Code the Dream seeks to build a gateway to the tech sector for minority and immigrant youth by offering free coding programs and classes. They also offer a unique chance for their students to gain real world experience by partnering with local businesses and organizations to work on professional projects serving community needs.

Alley Cats and Angels is an all-volunteer, foster home-based, cat rescue dedicated to helping stray, abandoned, and feral cats. Ultimately, this organization seeks to reduce the overall number of homeless cats in the Triangle through their adoption, barn cat, and spay/neuter assistance programs. Foster litters from Alley Cats and Angels regularly come to the Caktus office for socialization and several Cakti have ended up adopting kittens they met through this program!

Supporting the Arts

WCPE Radio the Classical Station is a non-commercial, independent, listener-supported station dedicated to excellence in classical music broadcasting. In addition, they provide grants supporting classical music education in North Carolina.

The Carrack empowers local artists by providing professional exhibit and performance opportunities in a volunteer-run, zero-commission space located in downtown Durham, North Carolina. They have been essential to the movement for a rejuvenated arts scene in Durham, especially through their efforts to support emerging, experimental, and/or minority artists as well as hosting and funding inclusive events and projects.

Looking Forward

We have administered our Charitable Giving Program since 2014, but it feels especially meaningful around the holidays, encouraging us to look forward at how we might make a difference in the new year. The program also allows us another opportunity to practice and live our values of fostering empathy and supporting our community.

Caktus GroupSupercharging your CSS with Stylus and PostCSS

Here at Caktus the front-end team stays on the bleeding edge by taking advantage of the latest and greatest tools. We only incorporate features into our packaging that are well-supported and production-ready, as well as those that meet our list of standard browser requirements. Luckily, there are plenty of tools that allow us to use experimental technologies with appropriate fallbacks for non-supported browsers.

Getting Started

Our front-end packaging includes npm and gulp to bundle CSS files differently based on our working environments. It is a good idea to separate local development and production environment pipelines in order to optimize each environment. In our package.json file, we use two scripts: dev and build.

"scripts": {
   "build": "./node_modules/.bin/gulp deploy",
   "dev": "./node_modules/.bin/gulp"
}

Dev is used when the project is run on a local development environment. We use tools like sourcemapping, watchers to track when specified files have changed, and livereload to auto refresh browsers when specific triggers are detected.

Our build script is used for staging and production environments. It is set up to concatenate and minify source files into one CSS file that gets served to the client. Both scripts do a fair amount of preprocessing and postprocessing of our style files and allow us to use some powerful features we would not normally be able to access. I will spend the bulk of this post outlining these features and why they are useful to implement in your next project.

Ways to use Stylus

At Caktus we use Stylus as our CSS preprocessor of choice. It has many of the same features as LESS and SASS; however, the added benefit of Stylus comes from its flexible syntax, ability to run functions, and out-of-the-box custom selectors.

In Stylus, you can structure your style files with more syntactic freedom than other CSS preprocessors. For example, if you prefer a more simplistic approach to writing style rules, you can do so:

body
    font 1rem Helvetica Neue, sans-serif
    margin 0
    padding 0

If you prefer the regular CSS syntax, Stylus supports it. Or, if you prefer any variation in between, Stylus also supports that. With flexible syntax, team members can now determine how to write CSS styles and patterns that work for the team as a whole - which has proven to be helpful for team members who do not come from a front-end background. More importantly, flexible syntax allows us to structure our CSS to be less noisy, which improves clarity and comprehension.
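For instance, the same rule can be written in standard CSS syntax or stripped down to indentation only (an illustrative sketch, not from our codebase):

```
/* regular CSS syntax */
a.button {
  color: #fff;
  padding: 0.5rem;
}

// the same rule, indentation-based
a.button
  color #fff
  padding 0.5rem
```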

Stylus also gives us the ability to define and call functions. I find this feature particularly useful when computing values that should not be static but rather relative to other values. In a simple example, we can now set the margin of an element based on a specific formula that is relative to an element's position within a container.

count = 4

divideByHalf(start, end, val)
    if start > end
        return val
    return divideByHalf(start + 1, end, val/2)

section
    for num in (1..count)
        *:nth-child({num})
            margin: divideByHalf(1, num, 3.5vw)

Evaluates to:

section *:nth-child(1) {
  margin: 1.75vw;
}
section *:nth-child(2) {
  margin: 0.875vw;
}
section *:nth-child(3) {
  margin: 0.4375vw;
}
section *:nth-child(4) {
  margin: 0.21875vw;
}

Stylus comes with many useful selectors. You can now use partial references and even ranges in partial references to assign an attribute to a nested element without worrying that the parent element will also inherit this attribute.

.menu
    .sub-menu
        display: none

        ^[0]:hover ^[-1..1]
            display: block

Evaluates to:

.menu .sub-menu {
  display: none;
}
.menu:hover .sub-menu {
  display: block;
}

Supercharge with PostCSS

Stylus has a lot of useful functionality and features out of the box, but we can do one better: we can postprocess our style files to be even more robust and future-forward! The main library we use to achieve this is PostCSS.

PostCSS allows us to use a plugin called CSSNext (as well as many other plugins), which in turn enables the use of CSS4 features and autoprefixer. These libraries grant us the luxury of offloading some mental baggage when it comes to writing styles and browser-specific support for all the different browser versions, as well as giving us the freedom to experiment with new technology to make our jobs easier and more sane.

So, what does this look like?

First, we need to set our source files and our environment flag:

var options = {
    stylus: {
        src: './myproject/static/stylus/index.styl',
        watch: './myproject/static/stylus/**/*.styl',
        dest: './myproject/static/css/'
    },
    development: true,
};

Next we create the gulp pipeline:

var stylusTask = function () {
    return gulp.src(options.stylus.src)
        .pipe(stylus())
        .pipe(concat('bundle.css'))
        .pipe(gulp.dest(options.stylus.dest));
};

Nothing too crazy here; we preprocess our style files and combine them into a single file called bundle.css and put it in our specified CSS destination folder.

What if we wanted to minify our CSS file to cut down on file size, but also include a way to debug by referencing the original style file where a rule originates from? We pass in a parameter to Stylus to minify the files and enable sourcemapping:

var stylusTask = function () {
    var stylusOpts = {
        compress: true,
        sourcemap: true
    };

    return gulp.src(options.stylus.src)
        .pipe(stylus(stylusOpts))
        .pipe(concat('bundle.css'))
        .pipe(gulp.dest(options.stylus.dest));
};

How about integrating some useful plugins that automatically prefix our styles and let us use new technology like CSS Grid, CSS Variables, and other CSS4 features? We can specify which plugins PostCSS should use for the features we want. In our case, CSSNext includes Autoprefixer, as well as a slew of new features:

var stylusTask = function () {
    var stylusOpts = {
        compress: true
    };

    var plugins = [
        // tell autoprefixer to prefix rules to support the last 2 versions of all browsers
        cssnext({browsers: ['last 2 versions']})
    ];

    return gulp.src(options.stylus.src)
        .pipe(stylus(stylusOpts))
        .pipe(postcss(plugins))
        .pipe(concat('bundle.css'))
        .pipe(gulp.dest(options.stylus.dest));
};

What if we want to modify the gulp pipeline in specific, local development only cases? We can use gulpif and lazypipe to pipe in extra tasks conditionally:

var stylusTask = function () {
    var stylusOpts = {
        compress: true
    };
    var plugins = [
        cssnext({browsers: ['last 2 versions']})
    ];
    var devHelpers = lazypipe()
        .pipe(notify, function () {
            console.log('CSS bundle-stylus built in ' + (Date.now() - start) + 'ms');
        });

    return gulp.src(options.stylus.src)
        .pipe(stylus(stylusOpts))
        .pipe(postcss(plugins))
        .pipe(concat('bundle.css'))
        .pipe(gulp.dest(options.stylus.dest))
        .pipe(gulpif(options.development, devHelpers()));
};

Lastly, what if we want to run the gulp pipeline in conjunction with other functions, based on our environment setting? We can achieve this by checking our environment setting variable and running the appropriate commands:

var options = {
    stylus: {
        src: './myproject/static/stylus/index.styl',
        watch: './myproject/static/stylus/**/*.styl',
        dest: './myproject/static/css/'
    },
    development: true,
};

if (argv._ && argv._[0] === 'deploy') {
    options.development = false;
} else {
    options.development = true;
}

var stylusTask = function () {
    var stylusOpts = {
        compress: true
    };
    var plugins = [
        cssnext({browsers: ['last 2 versions']})
    ];
    var devHelpers = lazypipe()
        .pipe(notify, function () {
            console.log('CSS bundle-stylus built in ' + (Date.now() - start) + 'ms');
        });

    var run = function () {
        return gulp.src(options.stylus.src)
            .pipe(stylus(stylusOpts))
            .pipe(postcss(plugins))
            .pipe(concat('bundle.css'))
            .pipe(gulp.dest(options.stylus.dest))
            .pipe(gulpif(options.development, devHelpers()));
    };

    if (options.development) {
        var start = Date.now();
        console.log('Building Stylus bundle');
        stylusOpts.compress = false;
        gulp.watch(options.stylus.watch, run);
        return run();
    } else {
        return run();
    }
};

gulp.task('css', stylusTask);

gulp.task('rebuild', ['css']);

gulp.task('deploy', ['rebuild']);

Final Thoughts

By customizing our CSS bundling process to take advantage of preprocessing and postprocessing options, we can now claim that our front-end packaging does the following:

  1. Accounts for multiple development environments (local, staging, production) by modularizing the CSS Gulp pipeline task.
  2. Uses style preprocessing that allows us to write style rules using familiar programming paradigms.
  3. Uses style postprocessing to ensure feature support and polyfills for all browsers, and enables us to safely implement experimental technology in production-ready settings.

If you found that helpful, we have more CSS and front-end tips on the blog.

Caktus Group2018 Event Shortlist

The Caktus team attends a number of conferences each year to learn about the latest tips and tools. Several of us also go to events to share knowledge as speakers or sprint leaders. Using our varied experiences, we’ve put together a list of the events we’re looking forward to next year.


UX Conference - Los Angeles, CA (UX)

NN Group hosts this conference for UX best practices. Our team appreciates the chance to train with industry thought leaders and take advantage of certification opportunities. Courses cover a range of skill levels, from beginner to advanced, so there’s a little something for everyone.

For more information about why you should attend, NN Group has an article including reasons, testimonials, and video. Not able to go to the West Coast? There is also a Washington, D.C. event in April.


TestBash Brighton - Brighton, United Kingdom (QA Testing)

Based on a good experience at TestBash Philadelphia, our QA team is excited about next year’s event in Brighton, UK. TestBash is described by the team as an opportunity to make connections and discuss the future of QA.

DisruptHR - Multiple Locations (Management / HR)

Recruiting and retaining top employees is important for any business. This conference is recommended by our HR staff for managers and HR professionals looking to try something new to support, grow, and encourage their teams. It's full of interesting lightning talks on the latest trends in HR with a modern perspective, leaving attendees feeling inspired and ready to approach challenges from a different angle.

Some locations also have events in April.


Global Scrum Gathering - Minneapolis, MN (Agile / Scrum)

The event of the year for Scrum masters. Head to Minneapolis in April (or London in October) next year to learn new applications and best practices for Scrum.

Wondering if it’s for you? Scrum Alliance has their list of top 10 reasons to attend.

Quality Jam - Atlanta, GA (QA Testing / Development)

Quality Jam is an event for those looking toward the future of QA testing. It promises to provide real-world solutions to software development challenges. Our team hopes to pick up the latest techniques for testing while getting some hands-on training.

deliver:Agile 2018 - Austin, TX (Agile)

deliver:Agile focuses on the tools and techniques behind Agile engineering and architecture. This conference welcomes not only project managers and developers, but also data scientists, UX and QA professionals, cloud specialists, and more in recognition of the diverse set of skills found on an Agile team.


PyCon 2018 - Cleveland, OH (Development)

While there are many great tech, Python, and Django events, PyCon is by far the most anticipated event here at Caktus. Why is it so popular? Our team appreciates the talks, tutorials, and development sprints; enjoys exchanging information on innovating with Python; and picks up insights from other Pythonistas.

There’s also the interpersonal aspect. Each year, Cakti look forward to reconnecting with peers, building new relationships, and uncovering partnership opportunities. The size of the conference, with nearly 3400 attendees in 2017, means that there is ample opportunity to meet Python enthusiasts and community leaders.

Those of our team who attend always pick a few of their favorite talks out of the many good ones delivered and add them to our PyCon Must-see Series. If you’ve never been to PyCon and are looking for a taste of what it’s like, check out those videos.


Eyeo Festival - Minneapolis, MN (Development / Data Visualization)

Data gains an extra punch when combined with visuals, and this event has been described by our team as “dataviz heaven”. Topics include everything from gestural computing to data art, so if the intersection of data and design is your thing, take a look at this one.


Agile2018 - San Diego, CA (Agile / Project Management)

Our project managers and Scrum master highlight Agile2018 as a conference that provides an excellent opportunity to learn trends and new ideas. This is a good generalist conference for anyone working with Agile and encompasses a wide range of topics.


DjangoCon US 2018 - San Diego, CA (Development)

DjangoCon is another staple for the Caktus team. As a Django-focused company, Caktus has sponsored and attended the last eight DjangoCon events, sending numerous team members. It’s a smaller conference than PyCon, offering a friendly atmosphere and an inclusive, supportive community of Django developers, with talks on a range of relevant topics. In 2017, those talks included one from a Caktus developer on writing an API for almost anything.

If you develop with Django, want to learn more about the framework, or are looking for Django-driven software vendors, this is a good conference.

All Things Open - Raleigh, NC (Development / Open Source)

When they say “all things open,” they’re not kidding. Open source, open web, and open tech are all covered here. This is a big event, with 3200+ attendees in 2017, so get ready to make new connections in the open community.

One of the other reasons we like this conference is the focus on diversity and inclusion, with initiatives to ensure underrepresented groups can attend.

Check out their list of reasons to go.


Red Hat Agile Day - Raleigh, NC (Agile / Project Management / QA)

This conference is free and there are always some good talks that inspire our team. This year’s included a presentation by an opera singer, which provided new perspectives in thinking about Agile’s applications. Consider going for a fresh take on Agile.

This event was last held in October 2017.

OnAgile - Online Event (Agile / Project Management)

Another conference presented by Agile Alliance, OnAgile is one of the more affordable events for attendees and accessible for those who can’t catch it live, with recorded sessions for later viewing. This event aims to bring Agile to everyone and was last held in October 2017.

Caktus Group: AWS re:Invent Recap

As a certified Amazon Web Services (AWS) Consulting Partner, Caktus sent a member of the team to AWS re:Invent this year to meet other solution providers, discuss with AWS representatives how to leverage our partnership to best serve our clients, and, of course, get hands-on experience with both existing and newly-revealed AWS services.

With nearly 40,000 attendees, 1,000+ sessions, and 40 tracks, all spread out across multiple venues, it was by far the largest conference I have had the privilege of attending. As a first-time attendee, I found the conference’s mobile application critical for making the most of the experience.

Conference organizers did a fantastic job of adding overflow and repeat sessions for popular topics. It probably comes as no surprise to learn that serverless, containers, and the Internet of Things (IoT) seemed to attract the most attendees. If you were unable to attend in person, or were there and missed interesting sessions, Amazon promptly made the sessions available on YouTube.

The Global Partner Summit provided a one-stop location to interact with other partners and attend breakout sessions related to the partner experience. It was great hearing how other solution providers tackle similar problems, such as repeatable, maintainable deployments, and learning about the 2018 roadmap for the AWS Partner program.

Caktus has utilized AWS as part of many clients’ solutions, such as iN DEMAND’s digital archiving system and University of Chicago’s online survey platform. Interested in learning more about how Caktus can assist you with your AWS and project needs? Contact us to get started.

Caktus Group: Caktus is Excited about Django 2.0

Did you know Django 2.0 is out? The development team at Caktus knows and we’re excited! You should be excited too if you work with or depend on Django. Here’s what our Cakti have been saying about the recently-released 2.0 beta.

What are Cakti Excited About?

Django first supported Python 3 with the release of version 1.5 back in February 2013. Adoption of Python 3 has only grown since then and we’re ready for the milestone that 2.0 marks: dropping support for Python 2. Legacy projects that aren’t ready to make the jump can still enjoy the long-term support of Django 1.11 on Python 2, of course.

With the removal of Python 2 support, a lot of Django’s internals have been simplified and cleaned up, no longer needing to support both major variants of Python. We’ve put a lot of work into moving our own projects forward to Python 3 and it’s great to see the wider Django community moving forward, too.

In more concrete changes, some Caktus devs are enthused by transitions Django is making away from positional arguments, which can be error-prone. Among the changes are the removal of optional positional arguments from form fields, the removal of positional arguments from indexes entirely, and the addition of keyword-only arguments to custom template tags.
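Keyword-only arguments are the plain-Python mechanism behind these changes. As a minimal, framework-free sketch (make_field here is a hypothetical helper for illustration, not a Django API), they force callers to name each option, so argument order can’t silently change meaning:

```python
# Everything after the bare * must be passed by keyword (a Python 3 feature).
# `make_field` is a hypothetical helper, not part of Django.
def make_field(*, required=True, label=None, help_text=""):
    return {"required": required, "label": label, "help_text": help_text}

field = make_field(required=False, label="Email")

# A positional call now fails loudly instead of silently binding a value
# to the wrong parameter:
try:
    make_field(False, "Email")
except TypeError:
    pass  # TypeError: positional arguments are rejected
```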

Of course, the new responsive and mobile-friendly admin is a much-anticipated feature! Django’s admin interface has always been a great out-of-the-box way to give staff and client users quick access to the data behind the sites we build with it. It can be a quick way to provide simple behind-the-scenes interfaces to control a wide variety of site content. Now it extends that accessibility to use on the go.

What are Cakti Cautious About?

While we’re excited about a Python 3-only Django, the first thing on our list of cautions about the new release is also the dropping of support for Python 2. We’ve been upgrading a backlog of our own Django apps to support Python 3 in preparation, but our projects depend on a wide range of third-party apps among which we know we’ll find holdouts. That’s going to mean finding alternatives, submitting pull requests, and even forking some things to bring them forward for any project we want to move to Django 2.0.

Is There Anything Cakti Actually Dislike?

While there’s a lot to be excited about, every big change has its costs and its risks. There are certainly upsets in the Django landscape we wish had gone differently, even if we would never consider them reasons to avoid the new release.

Requiring ForeignKey’s on_delete parameter

Some of us dislike the new requirement that the on_delete option to ForeignKey fields be explicit. By default, Django has always used the CASCADE rule to handle what happens when an object is deleted and other objects have references to it, causing the whole chain of objects to be deleted together to avoid broken state. There have also been other on_delete options for behaviors like prohibiting such deletions or setting the references to None when the target is deleted. As of Django 2.0, the on_delete argument no longer defaults to CASCADE; you must pick an option explicitly.

While there are some benefits to the change, one of the most unfortunate results is that updating to Django 2.0 means updating all of your models with an explicit on_delete choice…including the entire history of your migrations, even the ones that have already been run, which will no longer be compatible without the update.
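In a model, the fix is a one-line change; the ForeignKey syntax in the comment below is Django’s real API, while the runnable code is only a toy, framework-free sketch of what the three most common options mean:

```python
# In Django 2.0 a model must name the behavior explicitly, e.g.:
#
#     author = models.ForeignKey(Author, on_delete=models.CASCADE)
#
# A toy sketch of the semantics (illustrative only, not Django internals).
# `objects` is a list of (obj, ref) pairs, where `ref` names another
# object that obj points at, or None.
CASCADE, PROTECT, SET_NULL = "CASCADE", "PROTECT", "SET_NULL"

def delete(target, objects, on_delete):
    if on_delete == PROTECT and any(ref == target for _, ref in objects):
        raise ValueError("protected reference exists")
    result = []
    for obj, ref in objects:
        if obj == target:
            continue  # the target itself is always removed
        if ref == target:
            if on_delete == CASCADE:
                continue  # referencing objects are deleted too
            if on_delete == SET_NULL:
                ref = None  # the reference is nulled out instead
        result.append((obj, ref))
    return result

books = [("author1", None), ("book1", "author1"), ("book2", None)]
print(delete("author1", books, CASCADE))   # [('book2', None)]
print(delete("author1", books, SET_NULL))  # [('book1', None), ('book2', None)]
```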

Adding a Second URL Format

A new URL format is now available. It offers a much more readable and understandable format than the old regular-expression based URL patterns Django has used for years. This is largely a welcome change that will make Django more accessible to newcomers and projects easier to maintain.

However, the new format is introduced in addition to the old-style regular-expression version of patterns. You can use the new style in new or existing projects, and you can make the choice to replace all your existing patterns with the cleaner style, but you’ll have to continue to contend with third-party apps that won’t make the change. If you have a sufficiently large project, there’s a good chance you’ll forgo migrating all your URL patterns.

Maybe this will improve with time, but for now, we’ll have to deal with the cognitive cost of both formats in our projects.
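To make the difference concrete, the two Django styles shown in the comment below are real API (url() and path()); the compile_route helper underneath is only a toy sketch of how a path converter can boil down to the equivalent regex, not Django’s actual implementation:

```python
import re

# The same route in both Django styles (shown for comparison):
#   old: url(r'^articles/(?P<year>[0-9]{4})/$', views.year_archive)
#   new: path('articles/<int:year>/', views.year_archive)
#
# A toy sketch of compiling a path-style route down to a regex
# (illustrative only; Django's real converters live elsewhere).
CONVERTERS = {"int": r"[0-9]+", "str": r"[^/]+"}

def compile_route(route):
    pattern = re.sub(
        r"<(\w+):(\w+)>",
        lambda m: f"(?P<{m.group(2)}>{CONVERTERS[m.group(1)]})",
        route,
    )
    return re.compile(f"^{pattern}$")

match = compile_route("articles/<int:year>/").match("articles/2018/")
print(match.groupdict())  # {'year': '2018'}
```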

In Conclusion

Caktus is definitely ready to continue moving our client and internal projects forward with major Django releases. We have been diligently migrating projects between LTS releases. Django 2.0 will be an important stepping stone to the next LTS after 1.11, but we won’t wait until then to start learning and experimenting with these changes for projects both big and small.

Django has come a long way and Caktus is proud to continue to be a part of that.

Caktus Group: Caktus Discovery Workshops

Before an app can be built, the development team needs to know what they are supposed to be building. How do they establish that? With requirements gathering.

Requirements gathering

Product discovery, or requirements gathering, happens on every development project. This isn’t a service, but rather an internal process at a development company. Some of it must be carried out before anything can be designed or built, and some of it happens throughout the development project. While it may seem that this just adds time to the project, it is vital to delivering a product that meets the project objectives.

Requirements gathering may be as simple as having the client stakeholder, project manager, and developers review existing documentation and materials. However, often there is much more preparatory work to be done in order to build a solution that addresses the client’s business goals and the end user’s needs.

Product discovery ensures that all client stakeholders and the product team are in alignment on what is being built and why. This blog post explains the early stages of product discovery in more detail, but the process may include the following steps:

  • A review of the business and project goals.
  • A competitive landscape review, to gain an understanding of what has already been done and how well it’s working.
  • In the case of content-heavy websites, a content audit to determine what is available and how users are intended to interact with it.
  • A discovery workshop to determine requirements in greater detail.

Discovery workshops

Some projects need greater definition than is available at the beginning. They may lack documentation, or it may have become clear at some point in the sales process that the client has a great idea, but isn’t quite sure how to build it yet. Lack of consensus with or buy-in from other teams or departments on the client’s side may also be an issue.

If that’s the case, one tool to use as part of the initial discovery phase is a discovery workshop. The way in which the workshop is carried out is unique to each client and depends on the goals and budget of the project, but at Caktus we recommend starting with one of two techniques: user story mapping or content modeling. The technique used depends on whether the project is to build a web app or to develop a customer-facing marketing website.

What’s the difference? With a web app, the focus is on completing tasks, such as data input or interacting with the website to post an update. For a marketing website, the objective is to deliver content. Users must be able to easily locate content such as videos, PDFs, or even simple blog posts, and take the desired actions to consume it (i.e., read, bookmark, download, or share).

Let’s look at how user story mapping and content modeling form the basis of a discovery workshop for web apps and websites.

User story mapping

For web app development projects, user story mapping is essential to giving design, coding, UX, and testing teams an understanding of user flows, user tasks, and client priorities. It also ensures that essential features haven’t been overlooked.

User story mapping is a technique used to map out the user flows and tasks an app must support. A top-level flow of user actions (the narrative flow) is identified first. Next, the different tasks and subtasks necessary to accomplish the top-level actions are laid out beneath. Finally, tasks are sorted above or below a prioritization line to establish the most valuable features for inclusion in a minimum viable product (MVP).

A diagram of a user story map, with a priority line indicating the most valuable features.

The greatest value in carrying out user story mapping is building a shared understanding between Caktus and client teams around the features the application must support to deliver business and user value, and the order of priorities.

It also reduces the amount of guesswork that goes into estimating the time and money required to complete the project. It enables the team to estimate coding, UX, and QA work with more confidence, providing better value for money and a more accurate scope of work.

If the client decides to move forward with developing the project, an additional bonus lies in the ability to translate the map into user stories and to create a prioritized development backlog. This is the list of tasks that the team will focus on developing. The project manager organizes those tasks using existing data about the team’s pace and the information gained during requirements gathering.

Read more about user story mapping and how it is translated into a release plan.

It is important to note, however, that only an initial prioritization is done based on user story mapping. In Agile development, there is always room to update and re-prioritize tasks, so it shouldn’t be assumed that the backlog established at the beginning of a project is the final one or that all of the tasks listed at the start will be completed if there are changes to the project along the way. The project manager works with client stakeholders to ensure that any changes to budget, deadline, and desired features are appropriately accounted for in prioritization.

Content modeling

For marketing websites intended to deliver content, a discovery workshop focused on content modeling provides a more detailed understanding of how the website should be structured in order to facilitate content delivery.

For an existing website, a content audit is a necessary prerequisite. A spreadsheet detailing the following is a good place to start:

  • Different content types, associated page types, and file formats
  • The target audience
  • Desired user actions (e.g. watch, download, interact)
  • Intended placement on the website
  • How they will be updated and who will carry out the updates
  • Any other relevant notes such as priority, future plans, or preferences

A content modeling workshop helps refine content types and their relationships. It starts with asking questions about the needs users have when they come to the website, identifying nouns used to describe user needs and goals, and analyzing which content types connect to each other and how.

Content types are then broken down into chunks in the process of asking what content facets each content type comprises, and how those chunks could be best developed to support display across various screen sizes. This activity sets up the client stakeholders for the final tasks of writing new or amending existing content, which they do independently after the workshop.

For a new website without fully developed content, stakeholder interviews are a good method to generate the information needed to begin understanding what content might be appropriate to support user goals.

Other methods and techniques

Projects with more time and budget could include other activities. For example, diagramming the application architecture in addition to user story mapping helps in understanding relationships between an otherwise linear representation of user flows within a user story map. Ideation can help generate ideas for a new application, while sketching can help identify solutions for existing or new interfaces.

Any of the techniques mentioned in this post can be carried out individually or in conjunction with the others. They can be done outside of a workshop as well. However, our experience at Caktus is that a discovery workshop pulling in all of the stakeholders is most effective at getting to the heart of a project.

It should also be mentioned that while a discovery workshop is done at the beginning, the process of discovery doesn’t end when development begins. It occurs throughout the course of the project, especially when the project follows Agile methodologies.

Why do a discovery workshop?

Why spend extra time and money on a discovery workshop when you already know what you want?

It’s true, not every project needs a discovery workshop as part of the initial discovery phase. When clear documentation, priorities, and scope are available, sharing those and having a conversation may be the extent of what is needed for requirements gathering.

We’ve found that the best candidates for a discovery workshop are those projects where:

  • Documentation is available for an existing version of the app or website, but significant changes are desired for an updated version.
  • The project is complex in terms of dependencies, the number of interactions, or data structuring.
  • Teams on the client side are unsure how best to proceed, or have conflicting visions of what features would best fulfill user needs and/or business objectives.
  • The target users and key user tasks and flows haven’t been mapped out.

If one of those sounds familiar, or if you’re generally interested in finding out more about discovery workshops at Caktus, get in touch and tell us about your project. Still researching? Try this post about getting started with outsourced web development.

Caktus Group: Developing Sharp Interns

Our internship program sustains Caktus’ growth, challenges and reinvigorates our development practices, builds our relations with the local tech and wider Django communities, and hones our operational practices as a company. This post shares our guiding principles for how we structure our developer internship to achieve these goals, while providing a meaningful and edifying experience for the interns we hire.

Put in the Necessary Time and Resources

Long before we even begin recruiting, hiring, and onboarding a candidate, our team puts in extensive prep work in anticipation of two to three interns a year. We are detailed in our search, set aside a specific portion of our recruiting budget for the position, and cast a wide net. Considerable time and resources are devoted to finding an ideal candidate. The reasons for this are many:

  • It is costly and disruptive to hire the wrong person; we want to get it right.
  • Our internship partially functions as a pipeline for identifying local talent. We have to look at each and every candidate as though he or she could be joining our team full time.
  • Having a paid internship is one way to open the gates of the tech industry to those who have traditionally been shut out. It makes tech jobs more accessible to a wider pool of diverse talent. We want to get this opportunity in front of as many people as possible.
  • Our internship is unique in that it is fairly flexible and self-driven. It takes a candidate with a sufficient level of independence and moxie—balanced by the humility to ask for help when needed—to make this structure work. Finding such a candidate requires a significant amount of effort.

Treat Each Intern Like an Employee

Central to the success of our internship program is a deceptively simple tenet: treat each and every intern like an employee. It seems obvious, but many companies do not do this. For us, it is the most important element to an internship. Not only does it create an atmosphere for growth, but it also accomplishes the actual goal of an internship: introducing a novice to the real experience of working as a part of a development team.

Real Teams, Real Work

Rather than siloing our interns onto separate teams and assigning them busy work or the task of creating tools that will never be used again, Caktus interns are placed on a real team with our full-time developers. They are wholly integrated team members, taking part in all Scrum activities and any other team-related meetings.

Like any other developer on their team, interns self-select their work during sprint planning. They participate on real projects that will continue to be used and added to. They are doing work others will need to use later, learning best practices for writing clean, scalable code. At heart this means that our internship is not an academic experience, it is a practical one. We have found that this practicality serves as the best atmosphere in which an intern can grow.

And what do we get from instilling such trust in our interns and bringing them on as full-time team members? Having an intern fully participate on a development team encourages a more collaborative culture of mentorship in which questions are welcome and everyone remains open to fresh perspectives.

“I was encouraged to review my teammates' code, and my comments were taken seriously. I was always respected as a valuable part of the team.” - Charlotte Mays, Intern 2016 / now full-time developer

The Full Gamut of Operational Processes

An internship is a great way to practice, solicit feedback on, and fine-tune operational processes. Our internship program has been a great way for us to improve our interviews as well as our candidate screening and hiring practices. From onboarding to exit interview, we take our intern through the full process like any other employee. Not only does this give the intern necessary career experience, but it also creates a helpful feedback loop for internal process improvement.

Other Elements for Growth

Of course, treating an intern like an employee requires a lot of trust as well as the proper environment for success. Our interns themselves need to be sufficiently driven and sufficiently humble, and the structure of our program needs to support this balance. We have found the most success in allowing a self-determined and malleable learning plan, while providing the mentorship necessary to lend structure and direction.


We often describe our internship as a “choose your own adventure.” Not only do we remain flexible in terms of start and end date and work hours, but also in regard to the tasks an intern may take on. As mentioned above, interns self-select the tasks and features they will work on. This requires a delicate balance between:

  1. Selecting tasks they are capable of completing in a development sprint and,
  2. Selecting features that will challenge and develop their current skill set


To achieve this delicate balance, guidance through mentorship is key. Every intern is assigned a mentor from day one of their internship. Interns and mentors meet regularly to set, discuss, and track progress on goals, give and receive feedback, and evaluate personal and professional growth.

We hire interns with a variety of goals: from students still in school exploring web development as a potential career, to young adults fresh out of school seeking to enter the technology sector, to individuals seeking a career change after having worked in other industries. Whatever the context, we make sure to cater our mentorship program to each intern’s self-identified goals.

Improve the Community

To create a truly meaningful experience, it is important to keep our own end goals in mind as a company. Why are we doing this in the first place? We built our internship program to provide an opportunity that both personally rewards a learning developer and also improves the Django and local tech community. This means we focus on two main goals throughout the internship:

  1. Instilling development best practices: mentoring developers who will go on to write code the right way.
  2. Imparting Caktus’ values: producing developers who are curious and empathetic, seek excellence, and give back to their community, whether that be through open source contributions or future mentorship.

Ultimately, we love helping to mentor and grow developers in the community, and our internship program is a key part of that effort.

Learn more about our program and what it’s like to be an intern at Caktus from a former intern’s perspective.

Caktus Group: Getting Started with Outsourced Web Development

In researching outsourced web development, you may have come across a few different ways to get your project built and have some questions as a result. How well defined do the project requirements need to be prior to starting development? Will Waterfall or Agile methods deliver the best results? Should you look for a consultancy offering team augmentation or in-house Agile-based work? What are the ramifications for your project of picking one or the other?

Let’s take a look at each of these questions, and what we recommend for different projects.

Project definition

Moving forward with a project happens when three key pieces of information are known:

  • Budget: How much are you willing and able to spend?
  • Timeline: How quickly do you need the final deliverable?
  • Project requirements, e.g. a product roadmap, release plan, and/or defined MVP: Do you have a clear idea of what you want to build?

With this knowledge, a project can be estimated, giving you a better idea of how much can be built for your budget, whether there are time- or cost-saving alternatives, and whether additional or different work could add value to the project.

If you don’t have a timeline or budget, but do know what you want to build and can provide requirements and documentation, a team can evaluate the project and provide a cost and projected timeline.

Or, perhaps you know your timeline and budget but are still working on the third piece: clearly defining what exactly you are trying to build. Even if you think this part is figured out, it often happens that stakeholders have different visions for the project and lack a shared understanding, which can be time-consuming and costly to address later.

How do you check that everyone involved in the project really does have the same understanding of what will be delivered? A discovery phase is an excellent first step.

The discovery phase of a project consists of steps aimed at gaining a deeper understanding of the product, including its contexts, its users, and the business goals it is meant to support. One approach employed during the discovery phase is a discovery workshop, which may include a number of activities aimed at determining what should be developed and what the priorities are.

In the discovery workshop, the process of product discovery aids in framing the problem the product should solve; identifying user roles; mapping out user actions, tasks, and workflows; and finally sketching out ideas for a product that addresses each of those steps based on the unified vision gained from the workshop. Furthermore, techniques like user story mapping contribute greatly to building a high-level release plan that clearly prioritizes the most valuable features for development and gets the project off to a strong start.

Waterfall or Agile?

Once you know what you want to build, how quickly you need it, and how much you can spend, it’s time to look at the different ways of developing the application or website.

Waterfall and Agile are both methodologies, or processes, to guide software design and development. The principles of each methodology inform how the project is managed in terms of how it moves through each phase of development, how and when feedback is received and implemented, and when testing is carried out.

One of the main differences between the two methodologies is that Waterfall follows a linear model, where each succeeding phase is started only after the previous has been completely finished. In this model, the client doesn’t see any of the work until the project nears completion. The different team roles (designers, developers, quality assurance, and so on) don’t collaborate throughout the project, only seeing what the other team has built when it’s their turn to begin, and testing is carried out at the end.

Agile follows an iterative model. In iterative software development, work is broken into chunks that can be completed in a short time frame, with testing ongoing throughout. At the end of each iteration, the goal is to have a potentially complete product increment which can then be built on as needed.

Another difference between the two methodologies is that Agile development considers change to be part of the process. With Waterfall, it can be increasingly difficult to make changes or implement feedback as the project approaches completion. By the time the project is shared for review, it may be too late to make adjustments. In contrast, Agile teams produce usable software to give feedback on throughout the process and are able to implement that feedback more easily.

At Caktus, we use Agile frameworks like Scrum and Kanban to develop projects because it enables us to act on feedback and ensure we’re delivering the most valuable features first, a tenet of the Caktus Success Model. It also ensures that we’re focusing on those features which have been prioritized as most important by the client, even when priorities change.

What does this mean for our clients? In short, rather than asking a client to pay for work and then wait until the end of a project to see the results, we present production-ready features on a regular basis. That work can then be evaluated by client stakeholders, and feedback can be prioritized and implemented throughout the project in a continuous loop.

It’s worth mentioning that some level of flexibility in at least one of the three elements of a project as defined in the first section - scope, time, or budget - is necessary for Agile development. It is this flexibility that enables a team to accommodate changes as they arise and to respond to client feedback as the project progresses.

Client-managed project, or vendor-managed project?

In addition to the methodologies themselves, there are a few different ways to manage the projects. If you have an internal development team and project manager (PM), client-managed team augmentation may be an option. This is most feasible when the need for staff is temporary and one or a few roles are needed for a set number of hours per week to support the on-time completion of a project.

Team augmentation is most effective when a clear product roadmap is in place and you have an internal PM. If you lack a project manager or aren’t entirely sure what tasks need to be carried out and when, it’s common for a contractor to lack clarity on what tasks take priority and how they should be spending their hours. In that case, a more effective option may be an Agile-based project entirely contracted out to a custom development firm.

In this scenario, the external team is responsible for maintaining the backlog of tasks and features (with your input and feedback on priorities), determining what is worked on in each development period, and building and testing the work. All tasks, including project management, development, and quality assurance testing, are carried out by the external team.

That doesn’t mean the project is out of your hands. While working with the external team, you still play a key role as a stakeholder. The stakeholder stays in touch with the team, giving feedback and communicating priorities, and maintains the overall vision of what should be produced. There are regular opportunities to see progress as well as to communicate what is going well, what can be improved, and how well expectations are being met. This enables the team working on the project to deliver a product aligned to your specifications and objectives.

Get started

Ready to move forward with development? Caktus offers discovery workshops, team augmentation, and Agile-based development services. Even if you’re still unsure what will work best for your project, our experienced team can help determine which solutions will be most effective. Contact us to get started.

Caktus GroupShipIt Day Recap Q4 2017

Our quarterly ShipIt Day has come and gone, with many new ideas and experiments from the team. As we do every quarter, Caktus staff stepped away from client work to try new technology, read up on the latest documentation, update internal processes, or otherwise find inspiring ways to improve themselves and Caktus as a whole. Keep reading to see what we worked on.

Style and Design

Last ShipIt Day, our front-end and UX designers started work on a front-end style guide primer. Work continued on this ShipIt Day, with Basia working on typography and color. The guide now includes documentation explaining the handling of font properties and responsive font sizes, typography selectors and how they render with given settings, and guidance on how font sizes can be modified according to different placements or needs. The color palettes now show how colors should be displayed and used in context.

Basia also started learning CSS Grid with the Grid Garden game and delivered a talk at The Iron Yard called “Design Primers for Devs.”

JIRA Improvements

JIRA is one of the project management tools we use at Caktus, and a recent update changed the setup of our account. Sarah and Charlotte F worked to ensure that all of our projects and boards are tidy and reduced complexity in access, then demoed the changes to the team.

Educational Content

Recognizing a gap in content to help potential clients learn more about how we work at Caktus, Julie and Whitney plotted out the Sales process and brainstormed visual methods for presenting the information.

Photo Booth App

Neil looked into learning or refreshing his knowledge of a few different technologies by building a progressive web app (PWA) that he could use to scan barcodes. That idea morphed into a photo booth app, which provided an opportunity to learn how to access a laptop camera from within the browser. He also looked at using IndexedDB for storing blobs of binary data generated by the photos taken. Creating the app required manipulating canvas image data to produce a glitched effect and using React in concert with all of these, plus the Materialize CSS framework.

API Star

Mark took a look at API Star, a web API framework for Python 3. He dove into the type system and route definitions, finding that they do useful things out of the box, like automatic validation of inputs, HTTP-method-based routing, and a simple path matching syntax similar to the upcoming changes in Django 2.0. The framework also allows setup of authentication, another helpful feature. While the project’s immaturity shows at this time, it makes interesting use of type annotations and holds promise for the future.

In addition to API Star, Mark worked on improving a project test suite, identifying why it ran so slowly and cutting its runtime to a fraction of the previous time. This required learning a bit more about Factory Boy and making better use of SelfAttribute to reduce the number of models created when using sub-factories.


Prometheus

One of the services we offer at Caktus is managed hosting. To ensure that we’re using the best technology, Scott decided to evaluate Prometheus, an open source monitoring system. He found that it was fast and easy to get a server up and running, but he is still evaluating whether it’s a fit for Caktus.

Dokku and AWS Web Stacks

Colin recently helped redeploy a client website using Dokku and wanted to try out our AWS Web Stacks project to see if it could be used with Dokku for a Code for Durham project. One of the challenges he encountered was the use of PostGIS geodata in the project, which needed to be configured within Dokku and imported. However, Dokku’s simple interface and automatic requirements installation meant that everything started working nicely.

He thinks that Dokku is a good alternative for projects that don’t need a lot of web servers.


Ansible Documentation

Documentation is important to software development, so Jeff used his ShipIt Day time to look into creating a unified set of documentation for our Ansible roles and tooling. He also worked with Dmitriy, Phil, and Vinod as they delved into Ansible.

Hello Tequila

Dmitriy revisited the Hello Ansible app and used it to learn more about Tequila, by setting up a basic Django project and using Tequila repos to deploy it. He took notes on how to set up a project and deploy it, finding a few bugs in the process. Next ShipIt Day, he’s hoping to create a readme or walkthrough.


Vue.js

While attending the CSS Dev Conference in New Orleans, Kia heard about a new JavaScript framework called Vue.js. For her ShipIt Day project, she decided to build a Twitter-like app to try it out.

Kia liked how the exercise required her to look at a design feature and break it down into its individual components, then recreate and reassemble them. She sees the framework as potentially valuable for websites relying on reusable components, and admires its focus on modularity and scalability.

QA Training

Our QA team took advantage of their ShipIt time to review training videos and materials for the American Software Testing Qualifications Board (ASTQB) certification, with the aim of reaching expert level. Robbie hopes that the certification will give him the skills and industry recognition to further his career as a QA analyst. Gerald was pleased to find that real world examples were used, and that the curriculum design enables testers to make connections between the certification courses and the real world.

Book Club App

For our Q2 2017 ShipIt Day, Charlotte M started on an app to help the Caktus Book Club vote on the next book to read. This ShipIt Day, Dana joined her in improving the app’s features and usability. Together, they added the ability to edit or delete a book from the list, and updated elections so that they can be deleted if need be. Previously, elections had a set open date that couldn’t be changed; now books can continue to be added before a date is set.

They also planned out functionality to add for the next ShipIt Day, including tracking respondents, improved list navigation or search, and more graceful error handling.

CloudFormation and AWS Web Stacks

Tobias continued to build on his Amazon Web Services expertise with ongoing work to our AWS Web Stacks project. He added a CloudFront Distribution for the app server to take advantage of its front-end caching capabilities and an Elastic Load Balancer to the Dokku stack.

AWS Web Stacks is open source, and you can find the code on GitHub or learn more about it in Tobias’ recent post about automating a Dokku setup with AWS Managed Services.

Django Cache Machine

Vinod worked with Tobias to add long-overdue support for Django 1.9, 1.10, and 1.11 to the open source project Django Cache Machine, which provides automatic caching to Django models. In the process, they learned a lot about the details of Python iterators.

Caktus GroupThe Opera of Agile: A Striking Performance at Red Hat Agile Day

Have you ever heard anyone sing opera during a tech-focused conference? Neither had I, until now.

Red Hat Agile Day, held in downtown Raleigh, recently provided this unique opportunity. The theme of the 2017 Red Hat Agile Day was “Agile: brokering innovation; bringing together great ideas.” The conference certainly lived up to that theme with a diverse line-up of speakers, including a former professional opera singer who bookended his presentation with songs. One was a creative, original ballad about being an Agile product manager (see the lyrics here), which he delivered at full blast, because how else can you sing opera?

The exuberant vocal performances by Agile Product Manager Dean Peters certainly took the attendees by surprise - the shocked looks around the room were priceless. Instantly, I knew this was not going to be the average presentation on the Agile mindset, process, or procedure. Peters’ presentation on “Five Things I Learned About Lean MVP as a Professional Opera Singer” was not only entertaining but also informative. His delivery also makes it one presentation I won’t easily forget.

Peters compared his experiences as an opera singer to those as a coder and Agilist, making connections between the stage and the computer screen. He explained that operas are produced iteratively, being an aggregate of many small components, with production milestones, and a release plan. Yup, definitely sounds like Agile software development.

He also went into detail about how developing a stage character is like developing a user experience persona for a website, and that doing so could increase empathy and understanding for clients, stakeholders, and end users, ultimately improving the Minimum Viable Product (MVP). I agree that creating personas is a valuable practice since it helps the product owner and the development team to better understand the audience and end-users, ultimately leading to a product that’s more tailored to the end-user’s needs. To help construct personas, Peters recommends using characterization tools from theater and leveraging acting exercises and games to gain empathy. I plan to keep these exercises in mind and hope to use them at Caktus.

Practice is another key element that Peters highlighted. Just as actors practice for a performance, developers should practice for a client or sprint demo. Peters elaborates on this in a blog post on “10 things singing opera taught me about product demo prep.”

The Broader Applications of Agile

As it turned out, the connection between opera and Agile development wasn’t as much of a stretch as I thought it was, and Peters’ comparisons were insightful and easy to follow (his slides are available online). It made me realize how inclusive and universal the Agile mindset is, and how applicable it is to other professions, not just software development. In reality, it is probably already being applied without us even realizing it, like in the writing process. Writing this blog post, for example, was an iterative and Agile process, broken down into phases which could be compared to sprints - drafting, reviewing, editing, finalizing, and then releasing.

While realizing the broader applicability of Agile, a statement on the Red Hat Agile Day website struck me. It challenged attendees “... to connect the ideas and insights you'll be gathering for new innovation.” The sharp team of seven Caktus project managers and quality assurance analysts who attended the conference have already discussed some ideas that were spurred by the various presentations at Red Hat Agile Day. For example, we’re looking into Acceptance Test-Driven Development (ATDD), which was presented by Ken Pugh. It would provide a different way for Caktus to view testing and would help developers and testers to better understand our customers’ needs prior to software implementation. While ATDD is not new, it would be new for Caktus and would result in an altered workflow and a shift in mindset regarding testing. If we move forward with it, it will be interesting to see the results of this Agile innovation.

Caktus GroupWhite Space Explained

What White Space Is

In the context of web design, white space (or negative space) is the space around and between elements on a page. To non-designers, it may seem unnecessary or an expression of a particular aesthetic (and therefore non-essential to a web page). To designers, it is an essential tool to increase the comprehension of a composition and guide a viewer’s attention and focus.

What White Space Does

While white space may evoke a sense of elegance and sophistication, that is not its primary purpose from the perspective of user experience. White space helps the user understand the interface without undue effort; it reduces cognitive load and, as a result, greatly improves the quality of the experience.

Micro white space -- the white space between smaller elements on the page (e.g., characters or lines of text) -- improves legibility.

Using micro white space to improve legibility in smaller web page elements. Screenshots of two versions of the same web page with different amounts of micro white space (line height): the image on the left shows insufficient line height, the image on the right a comfortable line height.

Macro white space -- the white space between and around larger interface elements (e.g., paragraphs of text or graphics) or groups of elements (e.g., a section of an article or a web form) -- helps direct attention to those elements and improves comprehension. For example, a study by Dmitry Fadeyev demonstrated a twenty percent increase in comprehension due to proper use of white space between and around text elements.

Using macro white space to improve comprehension of a page. Screenshots of two versions of the same web page with different amounts of macro white space (margins between paragraphs and list items): image on the left shows lack of margins, image on the right shows paragraphs and list items separated by margins.

How White Space Works

White space works to improve legibility and comprehension in three major ways.

White space reduces cognitive load by increasing scannability of a web page

Properly applied white space supports scannability, an objective long postulated by Nielsen Norman Group (NNGroup). The results of their study from 1997 still hold true today: most users do not read web pages word-by-word. Instead, they scan them for specific words and sentences.

Separating chunks of text with a sufficient amount of white space makes scanning easier, decreasing the strain the user experiences in searching for the content they seek on a page.

White space clarifies relationships by fostering the perceptual principle of proximity

Two Gestalt principles of perception -- proximity and figure-background -- rely on white space.

The principle of proximity states that “objects that are closer together are perceived as more related than objects that are further apart”[1]. That means that by increasing white space between elements on the page, we signal to the user that those elements are less (or not) related to one another. By bringing elements closer together, we indicate they have a closer relationship.

Consider the following web form example. The even spacing between form elements on the left offers no visual cue about their relationships. On the other hand, an increase in white space between form sections on the right (accompanied by an appropriate use of headings) makes relationships between those elements much clearer.

Demonstration of how white space improves form comprehension.

White space guides attention and focus by strengthening visual cues that support figure-background separation

The figure-background principle states that “elements are perceived as either figure (the element in focus) or ground (the background on which the figure rests)”[2]. Whether we perceive something as a figure or its background is a result of how our brain interprets cues carried by objects of perception. Size, color, and edge are among visual properties that help us interpret an object as a figure. And a figure is where our attention tends to focus. By skillfully applying white space, we can therefore direct the user’s attention to parts of a layout we want them to look at, and guide them through an interface to tasks we want them to complete on a page.

Consider two search engine pages shown below. The amount of white space the Google page employs leaves no ambiguity about what user action should be taken.

Comparison of white space as used by search engines Yahoo and Google. Screenshots of search engine pages: Yahoo on the left, Google on the right.


At the intersection of design and development, compromises must be made to meet budgetary constraints and deadlines. At the same time, we must recognize that many choices in web and user interface design are about minimizing cognitive load and facilitating comprehension. The long-term benefit of retaining and attracting users outweighs the short-term cost of precisely implementing the design choices critical to the quality of the user experience. White space is an important aspect of improving user experience, but it’s not the only one. Learn more about principles of good user experience from an earlier blog post.

1: “Design Principles: Visual Perception And The Principles Of Gestalt: Proximity,” Steven Bradley, Smashing Magazine, March 28, 2014.

2: “Design Principles: Visual Perception And The Principles Of Gestalt: Figure/Ground,” Steven Bradley, Smashing Magazine, March 28, 2014.

Caktus GroupCSS Tip: Fixed Headers and Section Anchors

Fixed headers are a common design pattern that keeps navigation essentials in easy reach as users meander down a page. Keeping a header fixed as the user scrolls can free up horizontal space for smaller devices by avoiding sidebars, and keeps your branding visible.

Anchors are another important navigation tool, linking not to a page but to a specific location in it. Whether for a long article, multiple parts of documentation, or navigation within a page broken up into sections, anchors can help users navigate directly to the part of a page they want to see.

Linking within a page is a natural case for using a fixed header. Users who follow links from other websites and land directly on an anchor on your web page can’t see your branding, your site navigation, or even what site they’ve landed on. Introducing a fixed header helps them see where they’ve navigated to, no matter where they’re taken on the landing page.

Unfortunately, internal linking and a fixed header pose a problem when used together.

The Problem

A fixed header overlapping a target.

Here we see a simple <a name="target"> anchor, which ends up behind our header, made translucent here for demonstration purposes. This happens because the browser navigates to the anchor by scrolling directly to it, but scrolling that far down puts the anchor visually right under the header. That’s a problem.

The Goal

A target behaving as desired under a fixed header.

This is what we want to see, with our anchor appearing just below the fixed header. The anchor is outlined in blue. You can see here how the section before the anchor is properly behind the fixed header, and the anchor is positioned just under it as if the top of the page starts just at the header’s bottom edge.

The Trick

We can make this happen with a little CSS trick. First, look at where we actually want the top of the page to appear so that our anchor appears in the right place.

Implementing the targeting trick

To make this happen we’re going to trick the browser into thinking the anchor is shifted above the visual location, by exactly the same height as the header we need it to appear under.

Just to set a baseline, let’s look at how the header we’re working around is actually set up.

header {
  width: 100%;
  background: lightblue;
  padding: 10px;
  margin-bottom: 10px;
  position: fixed;
  height: 30px;
}

The header is styled to affix itself to the top of the window as the user scrolls. This header is 30 pixels tall, has a 10 pixel padding, and a 10 pixel margin on the bottom to separate it from the rest of the page content a bit. The box layout of the header is illustrated below.

The Details

The box model showing how the target and fixed header are spaced.

If our “real anchor” needs to line up with the header and our “visible anchor” needs to appear just below it, then we need to position them apart by the total of the header’s height, margin, and padding. In our case, that makes an offset of 60 pixels.
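As a quick sanity check of that arithmetic (the box values are taken from the header rule shown earlier; note the padding applies to both the top and bottom edges):

```python
# Compute the anchor offset from the fixed header's box values.
height = 30         # header height in px
padding = 10        # applied to both top and bottom edges
margin_bottom = 10  # separates the header from the page content

offset = height + 2 * padding + margin_bottom
print(offset)  # 60
```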

Here’s our anchor’s styling:

.anchor a {
  position: absolute;
  left: 0px;
  top: -60px;
}

The visual label doesn’t need any special styling, but we do need to arrange the anchor and the label as siblings in the markup, inside a container. Note the &nbsp; in the otherwise empty anchor! This is important, as the browser won’t see the anchor as valid without some contents to navigate to.

<div class="anchor">
  <a name="target">&nbsp;</a>
  <h2 class="target-label">I am a good target</h2>
</div>

And the container just needs to be made a positioned element to allow our hidden anchor to be positioned relative to it, along with the visual label.

.anchor {
  position: relative;
}

The end result of implementing the target and fixed header as explained.

You can see the whole effect demonstrated on CodePen.

For more front-end tips, check out this post about making a jQuery plugin, or this one about CSS Grid versus frameworks.

Caktus GroupAutomating Dokku Setup with AWS Managed Services

Dokku is a great little tool. It lets you set up your own virtual machine (VM) to facilitate quick and easy Heroku-like deployments through a git push command. Builds are fast, and updating environment variables is easy. The problem is that Dokku includes all of your services on a single instance. When you run your database on the Dokku instance, you risk losing it (and any data that's not yet backed up) should your VM suddenly fail.

Enter Amazon Web Services (AWS). By creating your database via Amazon's Relational Database Service (RDS), you get the benefit of simple deploys along with the redundancy and automated failover that can be set up with RDS. AWS, of course, includes other managed services that might help reduce the need to configure and maintain extra services on your Dokku instance, such as ElastiCache and Elasticsearch.

I've previously written about managing your AWS container infrastructure with Python and described a new project I'm working on called AWS Web Stacks. Sparked by some conversations with colleagues at the Caktus office, I began wondering if it would be possible to use a Dokku instance in place of Elastic Beanstalk (EB) or Elastic Container Service (ECS) to help simplify deployments. It turns out that it is not only possible to use Dokku in place of EB or ECS in a CloudFormation stack, but doing so speeds up build and deployment times by an order of magnitude, all while substituting a simple, open source tool for what was previously a vendor-specific resource. This "CloudFormation-aware" Dokku instance accepts inputs via CloudFormation parameters, and watches the CloudFormation stack for updates to resources that might result in changes to its environment variables (such as DATABASE_URL).

The full code (a mere 277 lines as of the time of this post) is available on GitHub, but I think it's helpful to walk through it section by section to understand exactly how CloudFormation and Dokku interact. The original code and the CloudFormation templates in this post are written in troposphere, a library that lets you create CloudFormation templates in Python instead of writing JSON manually.

First, we create some parameters so we can configure the Dokku instance when the stack is created, rather than opening up an HTTP server to the public internet.

key_name = template.add_parameter(Parameter(
    "KeyName",
    Description="Name of an existing EC2 KeyPair to enable SSH access to "
                "the AWS EC2 instances",
    Type="AWS::EC2::KeyPair::KeyName",
    ConstraintDescription="must be the name of an existing EC2 KeyPair.",
))
dokku_version = template.add_parameter(Parameter(
    "DokkuVersion",
    Description="Dokku version to install, e.g., \"v0.10.4\" (see "
                "https://github.com/dokku/dokku/releases).",
    Type="String",
    Default="v0.10.4",
))
dokku_web_config = template.add_parameter(Parameter(
    "DokkuWebConfig",
    Description="Whether or not to enable the Dokku web config "
                "(defaults to false for security reasons).",
    Type="String",
    AllowedValues=["true", "false"],
    Default="false",
))
dokku_vhost_enable = template.add_parameter(Parameter(
    "DokkuVhostEnable",
    Description="Whether or not to use vhost-based deployments "
                "(e.g., foo.domain.name).",
    Type="String",
    AllowedValues=["true", "false"],
    Default="true",
))
root_size = template.add_parameter(Parameter(
    "RootVolumeSize",
    Description="The size of the root volume (in GB).",
    Type="Number",
    Default="30",
))
ssh_cidr = template.add_parameter(Parameter(
    "SshCidr",
    Description="CIDR block from which to allow SSH access. Restrict "
                "this to your IP, if possible.",
    Type="String",
    Default="0.0.0.0/0",
))
Next, we create a mapping that allows us to look up the correct AMI for the latest Ubuntu 16.04 LTS release by AWS region:

template.add_mapping('RegionMap', {
    "ap-northeast-1": {"AMI": "ami-0417e362"},
    "ap-northeast-2": {"AMI": "ami-536ab33d"},
    "ap-south-1": {"AMI": "ami-df413bb0"},
    "ap-southeast-1": {"AMI": "ami-9f28b3fc"},
    "ap-southeast-2": {"AMI": "ami-bb1901d8"},
    "ca-central-1": {"AMI": "ami-a9c27ccd"},
    "eu-central-1": {"AMI": "ami-958128fa"},
    "eu-west-1": {"AMI": "ami-674cbc1e"},
    "eu-west-2": {"AMI": "ami-03998867"},
    "sa-east-1": {"AMI": "ami-a41869c8"},
    "us-east-1": {"AMI": "ami-1d4e7a66"},
    "us-east-2": {"AMI": "ami-dbbd9dbe"},
    "us-west-1": {"AMI": "ami-969ab1f6"},
    "us-west-2": {"AMI": "ami-8803e0f0"},
})

The AMIs can be located manually via https://cloud-images.ubuntu.com/locator/ec2/, or programmatically via the JSON-like data available at https://cloud-images.ubuntu.com/locator/ec2/releasesTable.
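As a sketch of the programmatic route, the snippet below builds a RegionMap-style dict from releasesTable-like data. The sample payload and field layout here are assumptions based on the file's general shape (older versions of the real file are only "JSON-like" because they contain trailing commas, which we strip before parsing); the live data may differ.

```python
# Sketch: derive a region -> AMI mapping from releasesTable-style data.
# The sample string and its column layout are illustrative assumptions.
import json
import re

sample = '''{"aaData": [
    ["us-east-1", "xenial", "16.04 LTS", "amd64", "hvm:ebs-ssd",
     "20171026.1", "<a href='#'>ami-1d4e7a66</a>", "hvm"],
    ["us-west-2", "xenial", "16.04 LTS", "amd64", "hvm:ebs-ssd",
     "20171026.1", "<a href='#'>ami-8803e0f0</a>", "hvm"],
]}'''


def region_map(raw, release="16.04 LTS", storage_type="hvm:ebs-ssd"):
    # Remove trailing commas so the "JSON-like" data parses as real JSON.
    data = json.loads(re.sub(r",\s*([\]}])", r"\1", raw))
    amis = {}
    for row in data["aaData"]:
        region, _codename, rel, arch, storage = row[:5]
        if rel == release and arch == "amd64" and storage == storage_type:
            # The AMI ID is wrapped in an HTML link in the raw data.
            amis[region] = {"AMI": re.search(r"ami-\w+", row[6]).group(0)}
    return amis


print(region_map(sample))
```

The resulting dict has the same shape as the `RegionMap` mapping added to the template above, so it could feed `template.add_mapping` directly.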

To allow us to access other resources (such as the S3 buckets and CloudWatch Logs group) created by AWS Web Stacks we also need to set up an IAM instance role and instance profile for our Dokku instance:

instance_role = iam.Role(
    # ...
    Policies=[
        assets_management_policy,  # defined in assets.py
        logging_policy,  # defined in logs.py
    ],
)

instance_profile = iam.InstanceProfile(
    # ...
)

Next, let's set up a security group for our instance, so we can limit SSH access only to our IP(s) and open only ports 80 and 443 to the world:

security_group = template.add_resource(ec2.SecurityGroup(
    'SecurityGroup',
    GroupDescription='Allows SSH access from SshCidr and HTTP/HTTPS '
                     'access from anywhere.',
    # ...
))

Since EC2 instances themselves are ephemeral, let's create an Elastic IP that we can keep assigned to our current Dokku instance, in the event the instance needs to be recreated for some reason:

eip = template.add_resource(ec2.EIP("Eip"))

Now for the EC2 instance itself. This resource makes up nearly half the template, so we'll take it section by section. The first part is relatively straightforward. We create the instance with the correct AMI for our region; the instance type, SSH public key, and root volume size configured in the stack parameters; and the security group, instance profile, and VPC subnet we defined elsewhere in the stack:

ec2_instance_name = 'Ec2Instance'
ec2_instance = template.add_resource(ec2.Instance(
    ec2_instance_name,
    ImageId=FindInMap("RegionMap", Ref("AWS::Region"), "AMI"),
    # ...
))

Next, we define a CreationPolicy that allows the instance to alert CloudFormation when it's finished installing Dokku:

ec2_instance = template.add_resource(ec2.Instance(
    # ...
    CreationPolicy=CreationPolicy(
        ResourceSignal=ResourceSignal(Timeout='PT10M'),  # 10 minutes
    ),
    # ...
))

The UserData section defines a script that is run when the instance is initially created. This is the only time this script is run. In it, we install the CloudFormation helper scripts, execute a set of scripts that we define later, and signal to CloudFormation that the instance creation is finished:

ec2_instance = template.add_resource(ec2.Instance(
    # ...
    UserData=Base64(Join('', [
        # install cfn helper scripts
        'apt-get update\n',
        'apt-get -y install python-pip\n',
        'pip install https://s3.amazonaws.com/cloudformation-examples/'
        'aws-cfn-bootstrap-latest.tar.gz\n',
        'cp /usr/local/init/ubuntu/cfn-hup /etc/init.d/cfn-hup\n',
        'chmod +x /etc/init.d/cfn-hup\n',
        # don't start cfn-hup yet until we install cfn-hup.conf
        'update-rc.d cfn-hup defaults\n',
        # call our "on_first_boot" configset (defined below):
        'cfn-init --stack="', Ref('AWS::StackName'), '"',
        ' --region=', Ref('AWS::Region'),
        ' -r %s -c on_first_boot\n' % ec2_instance_name,
        # send the exit code from cfn-init to our CreationPolicy:
        'cfn-signal -e $? --stack="', Ref('AWS::StackName'), '"',
        ' --region=', Ref('AWS::Region'),
        ' --resource %s\n' % ec2_instance_name,
    ])),
    # ...
))

Finally, in the MetaData section, we define a set of cloud-init scripts that (a) install Dokku, (b) configure global Dokku environment variables with the environment variables based on our stack (e.g., DATABASE_URL, CACHE_URL, ELASTICSEARCH_ENDPOINT, etc.), (c) install some configuration files needed by the cfn-hup service, and (d) start the cfn-hup service:

ec2_instance = template.add_resource(ec2.Instance(
    # ...
    Metadata=cloudformation.Metadata(
        cloudformation.Init(
            cloudformation.InitConfigSets(
                on_first_boot=['install_dokku', 'set_dokku_env', 'start_cfn_hup'],
                on_metadata_update=['set_dokku_env'],
            ),
            install_dokku=cloudformation.InitConfig(
                commands={
                    '01_fetch': {
                        'command': Join('', [
                            'wget https://raw.githubusercontent.com/dokku/dokku/',
                            Ref(dokku_version),
                            '/bootstrap.sh',
                        ]),
                        'cwd': '~',
                    },
                    '02_install': {
                        'command': 'sudo -E bash bootstrap.sh',
                        'env': {
                            'DOKKU_TAG': Ref(dokku_version),
                            'DOKKU_VHOST_ENABLE': Ref(dokku_vhost_enable),
                            'DOKKU_WEB_CONFIG': Ref(dokku_web_config),
                            'DOKKU_HOSTNAME': domain_name,
                            # use the key configured by key_name
                            'DOKKU_KEY_FILE': '/home/ubuntu/.ssh/authorized_keys',
                            # should be the default, but be explicit just in case
                            'DOKKU_SKIP_KEY_FILE': 'false',
                        },
                        'cwd': '~',
                    },
                },
            ),
            set_dokku_env=cloudformation.InitConfig(
                commands={
                    '01_set_env': {
                        # redirect output to /dev/null so we don't write
                        # environment variables to log file
                        'command': 'dokku config:set --global {} >/dev/null'.format(
                            ' '.join(['=$'.join([k, k]) for k in dict(environment_variables).keys()])),
                        'env': dict(environment_variables),
                    },
                },
            ),
            start_cfn_hup=cloudformation.InitConfig(
                commands={
                    '01_start': {
                        'command': 'service cfn-hup start',
                    },
                },
                files={
                    '/etc/cfn/cfn-hup.conf': {
                        'content': Join('', [
                            '[main]\n',
                            'stack=', Ref('AWS::StackName'), '\n',
                            'region=', Ref('AWS::Region'), '\n',
                            'interval=1\n',  # check for changes every minute
                        ]),
                        'mode': '000400',
                        'owner': 'root',
                        'group': 'root',
                    },
                    '/etc/cfn/hooks.d/cfn-auto-reloader.conf': {
                        'content': Join('', [
                            '[cfn-auto-reloader-hook]\n',
                            'triggers=post.update\n',
                            # trigger the on_metadata_update configset on any
                            # changes to Ec2Instance metadata
                            'path=Resources.%s.Metadata\n' % ec2_instance_name,
                            'action=cfn-init',
                            ' --stack=', Ref('AWS::StackName'),
                            ' --resource=%s' % ec2_instance_name,
                            ' --configsets=on_metadata_update',
                            ' --region=', Ref('AWS::Region'), '\n',
                        ]),
                        'mode': '000400',
                        'owner': 'root',
                        'group': 'root',
                    },
                },
            ),
        ),
    ),
    # ...

The install_dokku and start_cfn_hup scripts are configured to run only the first time the instance boots, whereas the set_dokku_env script is configured to run any time any metadata associated with the EC2 instance changes.
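To see what the 01_set_env command actually hands to dokku config:set, here is its string-building expression in isolation; the variable names and values below are placeholders, not the real stack outputs:

```python
# The 01_set_env expression in isolation, with placeholder names and values.
# It builds KEY=$KEY pairs so the shell expands each value from the env
# mapping that cfn-init passes to the command.
environment_variables = [
    ('DATABASE_URL', 'postgres://db.example.com/app'),
    ('CACHE_URL', 'redis://cache.example.com:6379'),
]

args = ' '.join(['=$'.join([k, k]) for k in dict(environment_variables).keys()])
print('dokku config:set --global {} >/dev/null'.format(args))
# prints: dokku config:set --global DATABASE_URL=$DATABASE_URL CACHE_URL=$CACHE_URL >/dev/null
```

Because the shell substitutes each $KEY from the env mapping at run time, the actual secret values never appear in the command line or the cfn-init log.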

Want to give it a try? Before creating a stack, you'll need to upload your SSH public key to the Key Pairs section of the AWS console so you can select it via the KeyName parameter. Click the Launch Stack button below to create your own stack on AWS. For help filling in the CloudFormation parameters, refer to the Specify Details section of the post on managing your AWS container infrastructure with Python. If you create a new account to try it out, or if your account is less than 12 months old and you're not already using free tier resources, the default instance types in the stack should fit within the free tier, and unneeded services can be disabled by selecting (none) for the instance type.


Once the stack is set up, you can deploy to it as you would to any Dokku instance (or to Heroku proper):

ssh dokku@<your domain or IP> apps:create python-sample
git clone https://github.com/heroku/python-sample.git
cd python-sample
git remote add dokku dokku@<your domain or IP>:python-sample
git push dokku master

Alternatively, fork the aws-web-stacks repo on GitHub and adjust it to suit your needs. Contributions welcome.

Good luck and have fun!

Caktus GroupUser-Centered Navigation Design

Designing navigation that will support the needs of website users is one of the more important aspects of site usability. At Caktus we practice iterative, user-centered navigation design, which includes user feedback.

Identify Content Categories Through Card Sorting

Before devising a way for users to navigate content, it’s a good idea to make sure that the content is organized in a way that makes sense to them. Better yet, find out how users would categorize content. One way to do this is through card sorting.

There are three methods to carry out a card sorting study:

  • Open: Make a list of labels representing pieces of content and let users create and name their own categories to organize the labels.

An example of open card sorting. Screen capture of an open card sorting interface in OptimalWorkshop.com

  • Closed: Provide users with a list of categories along with the labels representing the pieces of your content, and allow them to categorize the content labels into the provided categories.

An example of closed card sorting. Screen capture of a closed card sorting interface in OptimalWorkshop.com

  • Hybrid: Provide a set of predetermined categories and allow users to create their own categories. Let them organize content labels into the predetermined categories and/or their own categories.

An example of hybrid card sorting. Screen capture of a hybrid card sorting interface in OptimalWorkshop.com

A card sorting study will reveal how users think about categorizing content. It can be conducted with index cards or sticky notes, or with a digital tool. Advantages that digital tools offer include the ability to conduct remote studies and a quick analysis of results. For example, they can display the results of a card sort as a matrix showing what percentage of users placed which piece of content into which category. At Caktus, we use OptimalWorkshop for card sorting, as well as for treejack and first-click testing (described below).

Pro tip: If, prior to card sorting, you have already established guidelines for a controlled vocabulary on your website, a closed card sort may be a good choice. If you are still deciding on terminology, learning the words your audience uses to describe your content in an open card sorting study can provide invaluable insights.

A popular placements matrix showing the results of a card sorting exercise. Screen capture of Popular Placements Matrix in OptimalWorkshop.com

Validate the Content Organization Through Treejack Testing

Once you come up with the first iteration of content organization, it is a good idea to validate that content structure through treejack testing (also known as “reverse card sorting”).

In treejack, you build a tree-like structure out of labels representing content. A treejack consists of nested levels of content labels that mimic your intended information architecture. During the test, users are asked to find specific pieces of content within that tree.

An example of a treejack test. Screen capture of a treejack interface in OptimalWorkshop.com

If the treejack is based on results from a card sorting study, you might expect that users find content labels exactly where you put them. Let go of that expectation. No classification of content you may come up with, even with the help of users, is going to be perfect. It’s more useful to think of treejack as another opportunity to refine your content organization.

Pro tip: What if the results of a treejack test contradict the results of a card sorting study? That may happen, especially if both studies are qualitative, meaning that they rely on a small sample group of users. That means your job is not done, and you should continue to tweak and test.

Continue with First-Click Testing

When searching for content on a live website, users rely on a number of cues offered by good design. Those cues are absent in treejack testing, and that may be a factor preventing users from being successful. Continue testing content organization by giving users additional context. Asking users where they would click within a static mockup to find a specific piece of content can offer insights into users’ mental models. This may help resolve any ambiguities between card sorting and treejack testing.

Pro tip: When coming up with tasks for a first-click test, avoid using words that are present in links and buttons in the interface design that you are testing. For example, if you are testing a “Contact Us” button, don’t ask the user, “Where would you click to contact this company?” Instead, ask, “Where would you click to get in touch with this company?” Also, avoid asking leading questions. For example, instead of asking, “Would you look for squash under vegetables or under meat?” ask, “Where would you click to find squash?”

An example of first click testing, asking where a user would click to learn what Caktus Group does. Screen capture of a first click testing interface in OptimalWorkshop.com

Conduct Usability Testing on a Live Interface

By the time a design is translated into code, the organization of content and the navigation pattern should have been through several iterations based on results from card sorting, treejack, and first-click testing. Now a live interface can be tested, which adds a new dimension that may facilitate or hamper users’ ability to navigate. Usability testing on a live interface is a chance to find out how your design decisions hold up.

A usability test for a to-do app. Image source: Validately.com

Pro tip: If previous user tests left you with unanswered questions about content categorization, begin by focusing on tasks that will help you resolve those questions. Use the same or similar tasks to those you gave users during the first-click testing. Pay attention not only to what users do, but also to what they say in order to understand the mental models that guide their interactions with the interface.


The process of organizing content and identifying navigation patterns that will support user goals is messy (learn more about how to make sense of any mess from Abby Covert). There is no perfect solution. The best option is to identify common mental models and patterns, and to ground your content structure and navigation pattern in that knowledge. The tricky part in qualitative studies is figuring out what is a quirk and what is a pattern. Repeated testing with a small sample group of users is a good way to come closer to the answers you seek.

Interested in learning more about UX? We have posts about product discovery, the principles of good UX design, user story mapping, and more.

Caktus GroupEliciting Helpful Stakeholder Feedback

Client feedback is integral to the success of a project, and as a product owner, obtaining it is part of your responsibility. Good feedback is not synonymous with positive or negative feedback. A client should feel empowered and comfortable enough to speak up when something isn’t right. If they wait to share their honest thoughts, there is a high chance it will cost more time and money to fix down the road.

Below are some suggestions to elicit better feedback from your clients. Here at Caktus we present our work in sprint reviews, but these tips can be applied anytime you are presenting work and require client feedback:

  • Start your presentation by being extremely clear on the goals of the meeting. Let the client know that the entire purpose is to get their feedback on the stories/features your team is presenting. Remind them that they will not hurt anyone’s feelings if they tell you what is not working for them (they should, of course, provide details and not just a generic, “This is terrible and wrong” comment).

  • Share only the stories or features that will elicit feedback. There is often work done in a sprint that does not have any user-visible components (e.g., technical debt). Feel free to let the client know that those items have been accomplished if you think it is relevant for them to know (after all, it was work that the team accomplished). However, spend the majority of the time sharing features that they can see and understand, and that do require their feedback.

  • Tell a story. Go through the completed work as a sequence of events, as the user would experience them. Be careful not to review the work ticket by ticket; instead, present a holistic view of how the overall feature works.

  • Show the functionality from the customer’s perspective, not from the code level.

  • Use real data, or at least data that makes sense. Populating the application with lorem ipsum or some other random dummy data will make it difficult for you to present the app in a way that makes sense for the client. For example, if you are creating an app for booking flights, you will want the data to reflect that (cities, times, airlines, dates), even if it is just placeholder data.

  • Ask your developer to tell the client why they developed a feature in the way that they did, how it benefits the user, and what kind of feedback is needed. “For this feature, we made it work this way because ABC. Does this accomplish what the user needs to do? What are we missing?”

  • Coach the client on the specific types of feedback that would be most helpful. “Here we are looking for specific feedback regarding the navigation,” or, “For this feature, how close is this layout to what you had envisioned?”

  • Ask for the feedback in an open-ended manner versus questions answerable with yes/no. “How do you envision the user would utilize this feature? In what ways might it be confusing for them? What might it need to do that it is currently not doing?”

  • Try to make the sprint review compelling, relevant to the audience, and at an appropriate technical level. This will help you keep people’s attention and ensure they are engaged enough to give the feedback you need.

Guiding your client in a way that helps them articulate and communicate what is working and what is not will help to ensure that you are building the product they want. Getting the feedback as early as possible helps the team do this within the time and budget allotted. Good feedback will lead to a good product.

Find more project management tips in this post about being a product owner in a client services organization.

Caktus GroupThe Importance of Developer Communities

Go to any major city and you will be able to find a user group for just about every major, modern programming language. Developers meet in their off hours to discuss what’s new in their language of choice, challenges they’ve encountered, and different ways of doing things. If you’ve never been to one of these groups, it might be easy to brush them off as an unimportant outlet where people talk in way too much detail about a geeky interest. In reality, most of the attendees are professionals who are looking to build skills and find new ways to solve problems.

Why do we sacrifice our personal time to discuss the things we do all day at work? Simply put, it makes us better programmers. When I attend a meetup or talk, even on topics which only have a small overlap with the code I write on a day-to-day basis, I always learn something new. I don’t need to jot it down or record it; I wouldn’t ever think to go back and reference those notes anyway. But weeks, months, or sometimes even years later, when I come across a hurdle which requires a creative solution, something may nag me from the back of my mind. “Remember you once heard a talk about _?” it asks. “Maybe a solution like that one would help here?” To Google I go, with a topic in mind that gives me a jumping-off point.

Having diverse little nuggets floating around in the recesses of my memory gives me a large bank of ideas to draw from when I need a solution I may not have used before. The memory may not even be directly applicable, but you’d be surprised how often there are parallel solutions in wildly different areas. These ideas don’t always pan out, but they get me past that coder’s block often enough to be very much worth the investment of my time.

Besides the benefits for an individual coder though, user groups for a programming language or industry can help the broader community of developers in a number of ways. Most transparently, these groups provide an obvious place for a new developer to meet people, learn more about their language(s) of choice, and get advice on how to gain the necessary skills to accomplish their goals. But there’s a much more subtle benefit going on here as well. By attending these groups and interacting with people who may not otherwise cross paths, even the most experienced coders can have their rigid ideas challenged and break out of restrictive thinking.

Every workplace settles into a culture, in which certain ideas and techniques are considered to be “best practices,” often for very good reason. It is all too easy for these best practices to get calcified into “the way we’ve always done things,” and we all have stories about the perils of that line of thought. Developer communities, by providing a platform for developers to interact with a diversity of people and in less-structured contexts, allow them to break out of their workplace culture and have those ideas challenged. Sometimes, a developer will address the challenge and come out even more certain that their preferred methodology is really best, and other times they’ll come out thinking that there may be cases in which a different approach is a better solution. But no matter what, they will have had an opportunity to think through a practice that they’ve been executing for months or years without examining.

On occasion, revolutionary ideas come out of these groups, and gradually percolate through wider communities, but every single meeting contains a benefit for someone present. For these reasons and more, Caktus is proud to offer a space for these groups to meet, and to support the wider community. By welcoming discussion and learning into our Tech Space, we hope to encourage growth in individuals and the community, and challenge some of our own calcified ideas while we’re at it.

Caktus GroupCommuter Benefits and Encouraging Sustainable Commuting

Growth for Durham has meant a lot of great things for Caktus, from an expanding pool of tech talent to an increased interest in civic-minded tech solutions to shape the evolving community. This growth has also brought logistical challenges. Most recently, this meant providing adequate commuter support to our employees in a city whose transportation infrastructure is still nascent.

With limited available parking and an ever growing staff, we were unsure how best to tackle this problem. Rather than find additional parking where it didn’t yet exist, we began instead to investigate how we could potentially change commuting culture itself and create a more sustainable pathway for continued growth.

After careful examination and research, Caktus decided to vastly expand our commuter benefits. Beyond simply offering subsidized parking, starting in September 2017, Cakti will have a range of benefits to choose from. Employees who opt out of the parking benefit can choose instead to receive stipends for expenses related to biking, or pre-tax contributions to help cover the cost of public transportation.

As part of this expansion, Caktus has also opted to build ties with local businesses and programs that offer additional perks to green commuters. Employees who choose to bike to work will become automatic members of Bicycle Benefits, an independent group that works with local businesses to offer perks and discounts to local bikers. We’ve also partnered with GoTriangle, the public face of the Research Triangle Public Transportation Authority, and their Emergency Ride Home and GoPerks programs to offer further aid, perks, and rewards to employees who choose greener commuting options.

By offering commuter benefits along with additional perks and rewards for green commuting, we hope to transition a number of our staff to greener modes of transportation. Not only will this provide a more sustainable growth plan in Durham’s increasingly urban environment, but it also encourages us to live up to what we value most as a company. We strive to do what’s best for the community, whether that be the thing that supports our employees or the thing that supports a local call for sustainable commuting. We hope this will be another step in that direction.

Interested in working for Caktus? Head to our Careers page to view our open positions.

Caktus GroupCaktus 10th Anniversary Event

Caktus turned ten this year and we recently celebrated with a party at our office in Durham, NC. We wouldn’t be where we are today without our employees, clients, family, and friends, so this wasn’t just a celebration of Caktus. It was a celebration of the relationships the company is built on.

Caktus party guests having a good time.

The last five years

Since our last milestone birthday celebration five years ago, Caktus has more than doubled in size, growing from 15 employees to 30-plus. Co-founder and CTO Colin Copeland was honored with the Triangle Business Journal’s 40 Under 40 award. The company itself moved from Carrboro to the historic building we now own in downtown Durham, where we’re pleased to be able to host local tech meetups when we’re not using it for our own special events.

Guests at the Caktus 10th anniversary party listening to a speech in the Tech Space.

In our work, we’ve continued our mission to use technology for good, building the world’s first SMS-based voter registration system, beginning the Epic Allies project to improve outcomes for young men with HIV/AIDS, and launching the Open Data Policing website for tracking police stop data.


Co-founder and CEO Tobias McNulty gave a speech to mark the occasion, sharing a view of how far the company has come. There was also enthusiasm for what we can achieve at Caktus in the next ten years, and those to come after.

Caktus co-founders Tobias McNulty and Colin Copeland.

As part of the celebrations we had food and birthday cupcakes, as well as prize giveaways for our team. Family and friends of Caktus employees joined in on the fun and games.

Caktus employees playing a game at the 10th-anniversary party.

We welcomed several clients as well, and we thank them along with all of those who have worked with us for giving us the opportunity to create meaningful tools that help people. A number of our clients have been with us for years, and we’re proud to have such a good relationship with those who trust us to build solutions for them.

Looking forward

The communities we’re a part of and the individuals in those communities have always been central to our focus. Growing sharp web apps is what we do, but it’s the people who build them and those we build them for that matter. With that in mind, we look forward to continuing to develop our internal initiatives around diverse representation, transparency, and fair pay. We are also dedicated to continuing our support of the various communities we are a part of, whether technical or geographic, through our charitable giving initiatives, conference and meetup sponsorships, open source contributions, and requiring a code of conduct to be in place and enforced at events we sponsor or attend.

Supportive, inclusive, and welcoming communities helped Caktus grow to where we are today, and we’re honored to be in a position to give back as we celebrate our tenth anniversary.

Credit for all photos: Madeline Gray.

Caktus GroupFalse Peaks and Temporary Code

In the day-to-day work of building new software and maintaining old software, we can easily lose sight of the bigger picture. I think we can find perspective when we step back and walk through the evolution of a single piece of software.

For example: first, you are asked for a simple slideshow to showcase a few images handed to you. Just five images and the images won't change.

An easy request! It only takes you a short time to build with some simple jQuery. You show the client, they approve it. You deploy it to production and call it a day.

This example, and all the examples in this blog post, are interactive. Try it out!

The next week, your client comes back with a new request. They don't think the users notice the slideshow can be navigated. They ask for previews of the next and last image, to use for navigation:


So you jump in. It’s an easy enough addition to the pretty simple slideshow widget you've already built. You slap in two images, position them just so, and add a few more lines of jQuery to bind navigation to them. Once again, it’s a quick review with the client and you ship it off to production.

But the next day, there's a bug report from a user. The slideshow works, the thumbnails show the right image, and the new previous/next preview images navigate correctly. However, the features don't work together, because the thumbnail navigation doesn’t change the new left and right preview images you added.

The client also wants the new navigation to act like a carousel and animate.

Now they want to add more photos.

And they want to be able to add, reorder, and change the photos on their own. That will break all the assumptions you made based on a fixed number of photos.

Every step along the way you added one more layer. One small request at a time, the overall complexity increased as those small requests added up. You took each step with the most efficient option you could find, but somehow you ended up with costly bloat. How does this keep happening?


At each step, you chose the best next move. Ultimately, this didn't take you where you needed to go.

We don't subscribe to waterfall development practices at Caktus. Agile is a good choice, but as we work through iterations, how do we bridge across those sprints to get a larger picture of our path? And how do we make decisions about technical debt and other choices whose impact extends beyond a single sprint?

Some of the code we write is there to get us somewhere else. Maybe you need to build a prototype to understand the original problem before you can tackle it with a final solution. Sometimes you need to stub out one area of code because your current task is another focus, and you'll come back to finish it or replace it in a future ticket. Many disciplines have these kinds of temporary artifacts, from the scaffolding built by construction crews to sketches artists make before paintings.

Maybe it is harder for software developers because we often don't know what code is temporary ahead of time. A construction crew would never say, "Now that we've built the roof, we really don't need those walls anymore!" but this is what it can often feel like when we refactor or tear down pieces of a project we worked hard on, even if it really is for the best.

My suggestion: become comfortable with tearing down code as part of the iterative process! Extreme Programming calls this rule "Relentlessly Refactor".

We need to think about some of the features we implement as prototypes, even if they've been shipped to end users. We won't always know that new code or features are stop-gaps or prototypes when we're building them, but we may realize later, when more information comes to light about where those features need to go next, that they were so all along.

Falling into the trap of thinking the work done in sprints is inherently additive is common, but destructive.

If each sprint is about "adding value", we tend to develop a bias towards the addition of artifacts. We consider modification to happen as part of bug fixes, seeing it as correcting a mistake in earlier requirements or code, or as changes stemming from evolving or misunderstood requirements. We may hold a bias against removing artifacts previously added, either within a given sprint or in a later sprint.

Going back to the construction analogy, when you construct a building you create a lot of things that don't end up in the final construction. You build scaffolding for your workers to stand on while the walls are being built up. You build wooden forms to shape concrete, removing them when the foundations and structures are solid.

You fill the building-in-progress with temporary lighting and wiring for equipment until the project is near completion and the permanent electrical and plumbing are hooked up and usable. A construction crew creates a lot of temporary artifacts in the path to creating a permanent structure, and we can learn from this when building software iteratively. We're going to have cases where work that supports what we're completing this sprint isn't necessary or may even be a hindrance in a future sprint. Not everything is permanent, and removing those temporary artifacts isn't a step backward or correcting a mistake. It is just a form of progress.

Jeff TrawickUpgrading from python-social-auth 0.2.19 to social-auth-core 1.4.0 + social-auth-app-django 1.2.0

I had a few issues with this many moons ago when I was trying the initial social-auth-core packaging. Yesterday I was able to get it to work with the latest version, which in turn allowed me to move from Django 1.10 to Django 1.11.
You will most likely encounter failed Django migrations when making the switch. Some posts on the 'net recommend first upgrading to an intermediate version of python-social-auth to resolve that, but I wanted a simpler production switchover, which I found in this social-app-django ticket. The eventual production deploy solution after testing locally with a copy of the production database was:
  1. Temporarily hack my Ansible deploy script to fail after updating the source tree and virtualenv for the new libraries but before running migrations.
  2. On the server, as the project user, run pip uninstall python-social-auth to delete the old package.
  3. On the server, make another copy of the production database and then run update django_migrations set app='social_django' where app='default'; via psql.
  4. On the server, as the project user, run python manage.py migrate social_django 0001 --fake.
  5. Remove the temporary fail from my Ansible deploy script.
  6. Run the deploy again, which will run the remaining migrations.

Caktus GroupQuick Tips: How to Change Your Name in JIRA

In May 2017, Atlassian rolled out the new Atlassian ID feature, which gives Atlassian product users a central Atlassian account that holds their user details. When this change occurred, our G Suite integration combined with the Atlassian ID feature to give some users strange display names in JIRA, which I (as the JIRA admin) can’t fix, since users now control their own profiles. However, they don’t control their profiles through JIRA. So how does one change the name that displays in JIRA? (Hint: you can’t do it through User Management.)

Step 1. Go to https://id.atlassian.com/profile/profile.action and log in.

JIRA account settings page

Step 2. Enter your desired display name in the field labeled Full Name.

JIRA account settings with a name change.

Step 3. Click Save.

Step 4. Return to your JIRA instance. If your name has not updated, log out and then back in again.

Step 5. Revel in your new name.

Setting a new JIRA name.

Want more JIRA? Read up on how we made use of the priority field to ease team anxiety.

Caktus GroupTips for Product Ownership and Project Management in a Client Services Organization

Looking for some pointers to improve my own client management skills, I scoured the internet to find practical ideas on how to handle challenges better as the product owner (PO) in a client-services organization. I came up completely short.

Using Scrum in a client-services organization comes with its own unique challenges. As the PO, you are technically the project’s key stakeholder (along with many other responsibilities nicely outlined here). However, when serving as a PO with external clients, you hold the responsibility, but not always the power, to make the final decisions on the priorities and final features of the product. Clients sometimes have an idea of what they want, but it may run counter to what you and your team recommend based on your experience (which is why the client hired you in the first place! It is okay to offer alternatives to their requests, as long as you can back them up with facts). Ultimately the client makes the final decision, but it is our job to give them our best recommendations.

Some companies designate the client as the PO, with all of the responsibilities that go along with that. This approach is often not feasible at Caktus since our clients are off-site, not part of the Scrum team, and have many other external responsibilities that do not involve our project. The client is the subject expert, but not necessarily well-versed enough in Scrum or software development to have the skill set to be a good PO at a technical level.

Here are some tips that I think are helpful for working with non-technical, external clients when using Scrum.

Set and reinforce expectations

You can explain Scrum in detail and give real-world examples to help build an understanding of what it entails, but until a person works within that framework, their grasp of it will be limited. If your client works in a less technical environment, Scrum is likely new to them. Treat every touchpoint (the discovery phase, Sprint Zero, every review and relevant communication) as an opportunity to underscore what you need from them as a client to help make the project successful. At Caktus, Scrum represents uncharted territory for many of our customers, but the process works because we treat each project as a learning opportunity, incrementally reinforcing the process and improving the agility of our partnership throughout the project.

Be transparent, but take into account the client’s background

In the name of transparency, we always offer clients:

  • full access to our ticket tracker and product backlog;
  • a detailed release plan for the most valuable features, listing all the tickets we believe we can achieve within the sprints;
  • a breakdown of, and calendar invites for, all the sprint activities for the team; and
  • an explanation of how each activity relates to their particular project (i.e., in backlog grooming we do ABC, in sprint planning we do XYZ, etc.).

Too much information, however, can be paralyzing. Get to know your client (how technical they are, how much time they have to be involved in the project, etc.) before deciding what information will be most helpful for them. The whole point is to create a product that delights the client, and make the process of getting there as smooth and easy as possible.

A client with limited technical knowledge may find digging through a product backlog requires more time than they have. Instead, you can give them consistent updates in other formats, even something as simple as a bulleted list. For example: “These are the tickets we are going to estimate in backlog grooming on Tuesday. Please review the user stories and the Acceptance Criteria (AC) to ensure it aligns with what you feel is important for this feature.” At Caktus, we typically take on the day-to-day management of the product backlog, based on our understanding of the project and the relative priorities communicated to us by our clients. For some clients this can take the place of having full access to everything, which at times serves more to overwhelm than to inform.

Similarly, the release plan should be built around certain features rather than specific tickets. Since a release plan is a best guess based on the initial estimates of the team and is constantly being adjusted, including features to be completed rather than specific tickets gives the team the means to focus attention on meeting the overarching project goals. Hewing to the release plan is not always possible, but when you can do it, it makes things less stressful for your client.

(Over) Communicate

There is a lot to accomplish in a sprint review meeting. You need to talk about what was accomplished, share it with the client, discuss their feedback on the completed work, talk about priorities for the upcoming sprint, and then possibly make adjustments based on the feedback that came out of the review. To help take the pressure off the client to review everything, give feedback, and think about next steps in a one-hour meeting, let clients know when features are ready for review on staging, in advance of the sprint review. That way they have ample time to play around with the features. By the time sprint review comes, they have a solid understanding of progress and we can use the sprint review to walk through specific feedback.

We recommend writing up your upcoming sprint goals as early as you can and sharing them ahead of time. It's important to note that these are only the goals, and that the team decides what they pull into the sprint. Then, after sprint planning, keep the client updated on which features your team was able to pull into the sprint so their expectations are set appropriately.

If you need something from a client, just ask. Explaining dependencies also helps (i.e., adjusting this feature too far down the road will be more expensive than fixing it now, so please give us feedback by X date so we can address it soon). Throughout my four-plus years at Caktus, I've found that technical expertise is only half the battle, and our most successful projects are those in which we stay in constant communication with the client.

Compromise when it makes sense for the client and for your team

Some clients are not comfortable using or navigating the tools we use every day. Therefore, if it helps a client to, for example, download ticket details from JIRA into an Excel spreadsheet formatted in a way that allows them to understand something better, it is worth the extra time and effort. However, keep in mind the overall balance of time and effort. If they ask you to keep a shared spreadsheet updated in real time with all updates in JIRA, help them understand why that might not be a good idea, and come up with some alternative solutions to get them what they need.


Much of what is out there on the internet about product ownership assumes you are a PO at a software company, with internal stakeholders. Having external clients doesn’t make Scrum impossible; it just makes it a little more challenging, and requires some tweaking to keep your client - and your team - happy!

Caktus GroupAdvanced Django File Handling

Modern Django's file handling capabilities go well beyond what's covered in the tutorial. By customizing the handlers that Django uses, you can do things pretty much any way you want.

Static versus media files

Django divides the files your web site is serving unchanged (as opposed to content delivered by your Django views) into two types.

  • "Static" files are files provided by you, the website developer. For example, these could be JavaScript and CSS files, HTML files for static pages, image and font files used to make your pages look nicer, sample files for users to download, etc. Static files are often stored in your version control system alongside your code.
  • "Media" files are files provided by users of the site, uploaded to and stored by the site, and possibly later served to site users. These can include uploaded pictures, avatars, user files, etc. These files don't exist until users start using the site.

Two jobs of a Django storage class

Both kinds of files are managed by code in Django storage classes. By configuring Django to use different storage classes, you can change how the files are managed.

A storage class has two jobs:

  • Accept a name and a blob of data from Django and store the data under that name.
  • Accept a name of a previously-stored blob of data, and return a URL that when accessed will return that blob of data.

The beauty of this system is that our static and media files don't even need to be stored as files. As long as the storage class can do those two things, it'll all work.
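
To make those two jobs concrete, here is an illustrative sketch (not Django's actual Storage base class, which has a richer API): a storage backend that keeps blobs in an in-memory dict instead of on a filesystem, yet still satisfies both responsibilities.

```python
# Illustrative only: the two jobs of a storage backend, with an
# in-memory dict standing in for the filesystem or remote service.
class DictStorage:
    def __init__(self, base_url="/files/"):
        self.base_url = base_url
        self._blobs = {}

    def save(self, name, content):
        # Job 1: store the blob under a name. If the name is taken,
        # pick a unique variant and return the name actually used.
        candidate, n = name, 0
        while candidate in self._blobs:
            n += 1
            candidate = f"{name}.{n}"
        self._blobs[candidate] = content
        return candidate

    def url(self, name):
        # Job 2: given a previously-stored name, return a URL that,
        # when accessed, would serve that blob.
        return self.base_url + name
```

Note that `save()` returns the name it actually used; Django relies on this, storing the returned name in the database field so it can ask for the matching URL later.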


Given all this, you'd naturally conclude that if you've changed STATICFILES_STORAGE and DEFAULT_FILE_STORAGE to storage classes that don't look at the STATIC_URL, STATIC_ROOT, MEDIA_URL, and MEDIA_ROOT settings, you don't have to set those at all.

However, if you remove them from your settings, and try to use runserver, you'll get errors. It turns out that when running with runserver, django.contrib.staticfiles.storage.StaticFilesStorage is not the only code that looks at STATIC_URL, STATIC_ROOT, MEDIA_URL, and MEDIA_ROOT.

This is rarely a problem in practice. runserver should only be used for local development, and when working locally, you'll most likely just use the default storage classes for simplicity, so you'll be configuring those settings anyway. And if you want to run locally in the exact same way as your deployed site, possibly using other storage classes, then you should be running Django the same way you do when deployed as well, and not using runserver.

But you might run into this in weird cases, or just be curious. Here's what's going on.

When staticfiles is installed, it provides its own version of the runserver command that arranges to serve static files for URLs that start with STATIC_URL, looking for those files under STATIC_ROOT. (In other words, it's bypassing the static files storage class.) Therefore, STATIC_URL and STATIC_ROOT need to be valid if you need that to work. Also, when initialized, it does some sanity checks on all four variables (STATIC_URL, STATIC_ROOT, MEDIA_URL, and MEDIA_ROOT), and the checks assume those variables' standard roles, even if the file storage classes have been changed in STATICFILES_STORAGE and/or DEFAULT_FILE_STORAGE.

If you really need to use runserver with some other static file storage class, you can either configure those four settings to something that'll make runserver happy, or use the --nostatic option with runserver to tell it not to try to serve static files, and then it won't look at those settings at startup.

Using media files in Django

Media files are typically managed in Python using FileField and ImageField fields on models. As far as your database is concerned, these are just char columns storing relative paths, but the fields wrap that with code to use the media file storage class.

In a template, you use the url attribute on the file or image field to get a URL for the underlying file.

For example, if user.avatar is an ImageField on your user model, then

<img src="{{ user.avatar.url }}">

would embed the user's avatar image in the web page.

The default storage class for media, django.core.files.storage.FileSystemStorage, saves files to a path inside the local directory named by MEDIA_ROOT, under a subdirectory named by the field's upload_to value. When the file's url attribute is accessed, it returns MEDIA_URL prepended to the file's path relative to MEDIA_ROOT.

An example might help. Suppose we have these settings:

MEDIA_ROOT = '/var/media/'
MEDIA_URL = '/media/'

and this is part of our user model:

avatar = models.ImageField(upload_to='avatars')

When a user uploads an avatar image, it might be saved as /var/media/avatars/12345.png. That's MEDIA_ROOT, plus the value of upload_to for this field, plus a filename (which is typically the filename provided by the upload, but not always).

Then <img src="{{ user.avatar.url }}"> would expand to <img src="/media/avatars/12345.png">. That's MEDIA_URL plus upload_to plus the filename.

Now suppose we've changed DEFAULT_FILE_STORAGE to some other storage class. Maybe the storage class saves the media files as attachments to email messages on an IMAP server - Django doesn't care.

When 12345.png is uploaded to our ImageField, Django asks the storage class to save the contents as avatars/12345.png. If there's already something stored under that name, Django will change the name to come up with something unique. Django stores the resulting filename in the database field. And that's all Django cares about.

Now, what happens when we put <img src="{{ user.avatar.url }}"> in our template? Django will retrieve the filename from the database field, pass that filename (maybe avatars/12345.png) to the storage class, and ask it to return a URL that, when the user's browser requests it, will return the contents of avatars/12345.png. Django doesn't know what that URL will be, and doesn't have to.

For more on what happens between the user submitting a form with attached files and Django passing bits to a storage class to be saved, you can read the Django docs about File Uploads.

Using Static Files in Django

Remember that static file handling is controlled by the class specified in the settings STATICFILES_STORAGE.

Media files are loaded into storage when users upload files. Static files are provided by us, the website developers, and so they can be loaded into storage beforehand.

The collectstatic management command finds all your static files, and saves each one, using the path relative to the static directory where it was found, into the static files storage. [2]

By default, collectstatic looks for all the files inside static directories in the apps in INSTALLED_APPS, but where it looks is configurable - see the collectstatic docs.
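
For example, extra search locations can be added with the STATICFILES_DIRS setting; the assets directory name below is just an assumption for illustration.

```python
# settings.py (fragment) -- STATICFILES_DIRS adds project-level
# directories for collectstatic to search, in addition to each
# installed app's static/ subdirectory.
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

STATICFILES_DIRS = [
    BASE_DIR / "assets",  # project-wide static files; name is illustrative
]
```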

So if you have a file myapp/static/js/stuff.js, collectstatic will find it when it looks in myapp/static, and save it in static files storage as js/stuff.js.

You would most commonly access static files from templates, by loading the static templatetags library and using the static template tag. For our example, you'd ask Django to give you the URL where the user's browser can access js/stuff.js by using {% static 'js/stuff.js' %} in your template. For example, you might write:

{% load static %}
<script src="{% static 'js/stuff.js' %}"></script>

If you're using the default storage class and STATIC_URL is set to http://example.com/, then that would result in:

<script src="http://example.com/js/stuff.js"></script>

Maybe then you deploy it, and are using some fancy storage class that knows how to use a CDN, resulting in:

<script src="http://23487234.niftycdn.com/239487/230498234/js/stuff.js"></script>

Other neat tricks can be played here. A storage class could minify your CSS and JavaScript, compile your LESS or SASS files to CSS, and so forth, and then provide a URL that refers to the optimized version of the static file rather than the one originally saved. That's the basis for useful packages like django-pipeline.
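
As a rough sketch of that idea (an assumed design for illustration, not django-pipeline's actual API, though it is loosely in the spirit of Django's ManifestStaticFilesStorage), a storage class can hand back a URL that embeds a content hash, so the URL changes whenever the file's contents do:

```python
# Illustrative only: a storage-like class whose url() points at a
# content-hashed name, giving browsers automatic cache-busting.
import hashlib

class HashedUrlStorage:
    def __init__(self, base_url="/static/"):
        self.base_url = base_url
        self._blobs = {}

    def save(self, name, content):
        # Store the blob under its plain name.
        self._blobs[name] = content
        return name

    def url(self, name):
        # Embed a short content hash in the filename, so the URL
        # changes whenever the stored contents change.
        digest = hashlib.md5(self._blobs[name]).hexdigest()[:12]
        stem, dot, ext = name.rpartition(".")
        hashed = f"{stem}.{digest}.{ext}" if dot else f"{name}.{digest}"
        return self.base_url + hashed
```

With this in place, a template's `{% static 'css/site.css' %}` would expand to something like `/static/css/site.<hash>.css`, and can be served with far-future cache headers.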

[2] collectstatic uses some optimizations to try to avoid copying files unnecessarily, like seeing if the file already exists in storage and comparing timestamps to the origin static file, but that's not relevant here.

If you’re looking for more Django tips, we have plenty on our blog.

Caktus GroupDjangoCon 2017 Recap

Mid-August brought travel to Spokane for several Caktus staff members attending DjangoCon 2017. As a Django shop, we were proud to sponsor and attend the event for the eighth year.

Meeting and Greeting

We always look forward to booth time as an opportunity to catch up with fellow Djangonauts and make new connections. Caktus was represented by a team of six this year: Charlotte M, Karen, Mark, Julie, Tobias, and Whitney. We also had new swag and a GoPro Session to give away. Our lucky winner was Vicky. Congratulations!

Winner of our DjangoCon 2017 prize giveaway.

This year we also had a special giveaway: one free ticket to the conference, donated to DjangoGirls Spokane. The winner, Toya, attended DjangoCon for the first time. We hope she had fun!

Top Talks

Our technical staff learned a lot from attending the other talks presented during the conference. Their favorite talks included the keynote by Alicia Carr, The Denormalized Query Engine Design Pattern, and The Power and Responsibility of Unicode Adoption.

Charlotte delivered a well-received talk about writing an API for almost anything. We’ll add the video to this post as soon as it’s available in case you missed it.

Another excellent talk series presented at DjangoCon!

See You Next Time

As always, we had a great time at DjangoCon and extend our sincere thanks to the organizers, volunteers, staff, presenters, and attendees. It wouldn’t be the same conference without you, and we look forward to seeing you at next year’s event.

Caktus GroupLetting Go of JIRA: One Team's Experiment With a Physical Sprint Board

At Caktus, each team works on multiple client-service projects at once, and it’s sometimes challenging to adapt different clients’ various tools and workflows into a single Scrum team’s process.

One of our Scrum teams struggled with their digital issue tracker; we use JIRA to track most of our projects, including the all-important sprint board feature. However, one client used their own custom issue tracker, and it was not feasible to transfer everything to our JIRA instance. A challenge then arose: how do we visualize the work we are doing for this project on our own sprint board?

We stick with JIRA

Since the tasks were already tracked in the client’s tracker, we did not want to duplicate that effort in JIRA, and we were unable to find an existing app to integrate the two trackers so that the data would sync both ways. But we still wanted the work to be represented in our sprints since it took up a significant portion of the team’s time.

Initially, we included placeholder JIRA tickets in our sprint for each person who would work on this project. Those tickets were assigned story points relative to the time that person was planning to spend on it. Essentially, among our other projects’ tasks and stories, we also had distinct blocks of hours to represent the work being done on this separate project.

This solution started to cause some confusion when the team tried to relate story points directly to hours, and it didn’t add any real value since the placeholder tickets lacked any specificity, so we decided to stop using them altogether. As a result, this project was not represented at all on our sprint board or in our velocity, and we did not have a good overall picture of our sprint work. This hindered our transparency and visibility into the team’s workload, and hurt our ability to allocate time across projects effectively (take a look at this post to see how we do that using tokens!).

We transition to a low-tech solution

Eventually, the team left JIRA behind and started using a physical whiteboard in the team room to visualize sprint work. The board allowed us to include tickets from our tracker and our client’s tracker in one central location.

A physical task board at Caktus.

We divided the board into the same columns that were on our JIRA sprint board to represent ticket status: To Do, In Progress, Pull Request, On Staging, Blocked, and Done. We use sticky notes to represent each user story, task, or bug, color-coded by project. Each sticky contains a ticket number that maps to the ticket in one of the trackers, a short title or description, and a story point estimate. We also started tracking sprint burndown and team velocity on large sticky sheets, also posted on the walls of the team room.

A physical sprint burndown chart at Caktus. A physical sprint burndown chart.

A physical team velocity chart at Caktus. A physical team velocity chart.

The physical board evolves

Including distinct tickets from the project in our sprints highlighted another challenge: the project’s priorities were determined by the client instead of by the team’s Product Owner, and the client did not use Scrum. This meant that the client changed the current priorities more frequently than our two-week sprint cadence, and the nature of the project was such that we had to keep up.

The team pointed out that we could not commit to finishing a specific set of tasks for that project since priorities at the beginning of the sprint were not fixed for the following two weeks (which is essential for carrying out a sprint effectively, as it allows the team to stay focused on a stable goal instead of having to shift gears often).

We decided that the best way to handle uncertain priorities was to divide the whiteboard into horizontal rows (or swimlanes), each with its own rules and expectations:

  • One swimlane for sprint work that we commit to finishing, and whose priorities do not change within the sprint.
  • A second swimlane for work that we want to make progress on but cannot commit to finishing in the sprint (mostly due to external dependencies).
  • A third swimlane for work that we have no control over, such as projects where priorities are not stable enough for two-week sprints, and the release day does not align with the end of our sprint. This swimlane uses more of a Kanban workflow, minus the work in progress limits.

All of the team’s projects are now represented with tickets that map to distinct user stories, tasks, and bugs in one central place, giving the team full visibility into the work being done during the sprint, without committing to work that is likely to fall in priority.

Where we are now

The team continues to work out the kinks of using a physical board, such as overlooking details that are included only in the issue trackers, needing to be physically in the team room to know what to work on next, updating tickets only once a day during standup, and sticky notes falling off the board when the room gets too hot.

We have also observed some distinct benefits to leaving JIRA behind:

  • We can easily include new projects that use any issue tracker into our physical sprint board;
  • The team is fully engaged with the physical artifacts and actively drives standups and sprint planning together, as opposed to having one person operate JIRA while everyone else watches;
  • The team enjoys moving the sticky notes along the board and takes satisfaction in updating the burndown chart (especially when it gets down to zero!);
  • They feel more freedom to experiment with the board, knowing that the possibilities are only limited by their imagination rather than the capabilities of the software.

I don’t know if the team will continue to use the whiteboard, if they will choose to go back to using JIRA’s sprint board, or if they will come up with some other solution; but as their Scrum Master, I have appreciated the journey, the team’s willingness to experiment and try new things, and their creativity in overcoming the challenges they encountered.

We didn’t always use Scrum at Caktus - check out this blog post to learn how we got started.

Caktus GroupShipIt Day Recap Q3 2017

Caktus recently held the Q3 2017 ShipIt Day. Each quarter, employees take a step back from business as usual and take advantage of time to work on personal projects or otherwise develop skills. This quarter, we enjoyed fresh crêpes while working on a variety of projects, from coloring books to Alexa skills.

Technology for Linguistics

As both a linguist and a developer, Neil looked at using language technology for a larger project led by Western Carolina University to revitalize Cherokee. This polysynthetic language presents challenges for programming due to its complex word structure.

Using finite state morphology with hfst and Giellatekno, Neil explored defining sounds, a lexicon, and rules to develop a model. In the end, he feels a new framework could help support linguists, and says that Caktus has shown him the value of frameworks and good tooling that could be put to use for this purpose.

Front-end Style Guide Primer

Although design isn’t optional in product development, the Agile methodology doesn’t address user interface (UI) or user experience (UX) design. We use Agile at Caktus, but we also believe in the importance of solid UX in our projects.

Basia, Calvin, and Kia worked to fill the gap. They started building a front-end style guide, with the intention of supplying a tool for Caktus teams to use in future projects. Among the style guide components considered during this ShipIt Day were layout, typography, and color palettes. Calvin set up the style guide as a standalone app that serves as a demo and testbed for ongoing style guide work. Kia explored CSS grid as a flexible layout foundation that makes building pages easier and more efficient while accommodating a range of layout needs. Basia focused on typography, investigating responsive font sizing, modular scale, and vertical rhythm. She also started building color palettes using color functions in Stylus.

Front-end style guides have long been advocated by Lean UX. They support modular design, enabling development teams to achieve UI and UX consistency across a project. We look forward to continuing this work and putting our front-end style guide into action!

Command Line Interface for Tequila

Jeff B worked on a command line interface to support our use of Tequila. While we currently use Fabric to execute shell commands, it’s not set up to work with Python 3 at the time of writing. Jeff used the Click library to build his project and incorporated difflib from the standard library in order to show a git-style diff of deployment settings. You can dig into the Tequila CLI on the Caktus GitHub account and take a look for yourself!

Wagtail Calendar

Caktus has built several projects using Wagtail CMS, so Charlotte M and Dmitriy looked at adding new functionality. Starting with the goal of incorporating a calendar into the Bakery project, they added an upcoming events button that opens a calendar of events, allowing users to edit and add events.

Charlotte integrated django-scheduler events into Wagtail while Dmitriy focused on integrating the calendar widget onto the EventIndexPage. While they encountered a few challenges which will need further work, they were able to demonstrate a working calendar at the end of ShipIt Day.

Scrum Coloring Book

Charlotte F and Sarah worked together to create a coloring book that teaches Scrum concepts, principles, and diagrams in an easily digested way. The idea was based on The Scrum Princess. Their story follows Alex, a QA analyst who joins a development team, through the entire process of completing a Scrum project.

Drafting out the Caktus Scrum coloring book.

Over the course of the day, they came up with the flow of the narrative and formatted the book so that each image to color appears on its own page, paired with story text and definitions. Any illustrators out there who want to help it come to life?

QA Test Case Tools

Gerald joined forces with Robbie to follow up on Gerald’s project from our Q2 2017 ShipIt Day. This quarter, our QA analysts tinkered with QMetry, adding it to JIRA to see whether this could be the tool to take Caktus’ QA to the next level.

QMetry creates visibility for test cases related to specific user stories and adds a number of testing functions to JIRA, including the ability to group different scenarios by acceptance criteria and add bugs from within the interface when a test fails. Although there are a few configuration issues to be worked out, they feel that this tool does most of what they want to do without too much back-and-forth.

Wagtail Content Chooser

Phil also took the chance to do some work with Wagtail. Using the built-in page-chooser as a guide, he developed a content-chooser that shows all of the blocks associated with that page’s StreamFields. The app can get a content block with its own unique identifier and would enable the admin user to pull that content from other pages into a page being worked on. Next steps will be incorporating a save function.

Publishing an Amazon Alexa Skill

For those seeking inspiring quotes, David authored a skill for Amazon Alexa that returns a random quote from forismatic. An avid fan of swag socks, David came across the opportunity to earn some socks (and an Echo Dot) from Amazon if he submitted an Alexa skill and got it certified. He used the Flask extension Flask-Ask to develop the skill rapidly, deployed it to AWS Lambda via Zappa, and is now awaiting certification (and socks). Caktus is an AWS Consulting Partner, so acquiring Alexa development chops would add another service we could offer to clients.

Catching Up on Conferences

Dan caught up on videos of talks from conferences:

He also looked at the possibility of building a new package that preprocesses JavaScript and CSS, but after starting work he realized there’s a reason why existing packages are complicated and resolved to revisit this another time.

That’s all for now!

Although the ShipIt Day projects represent time away from client work, each project helps our team learn new skills or strengthen existing ones that will eventually contribute toward external work. We see it as a way to invest in both our employees and the company as a whole.

To see some of our skills at work in client projects, check out our Case Studies page.