A planet of blogs from our members...

Caktus Group: Announcing the New Durham Women in Tech (DWiT) Meetup

We’re pleased to officially announce the launch of a new meetup: Durham Women in Tech (DWiT). Through group discussions, lectures, panels, and social gatherings, we hope to provide a safe space for women in small and medium-sized Durham tech firms to share challenges, ideas, and solutions. We especially want to support women on the business side in roles such as operations, marketing, business development, finance, and project management.

A small group of us at Caktus decided to start DWiT after being unable to find a local group for those in similar positions to us: we work on the business side and, as part of a growing company, wear many hats. Our roles often include implementing new processes and policies, tasks that influence culture and corporate direction. We have a seat at the table, but it’s not always clear how to help our companies move forward. How do we work towards removing the barriers women face in the tech industry within our roles? How do we help ourselves and our teams when faced with gendered challenges?

By pulling together a group of similar women, we hope to pool everyone’s experiences into a shared resource. We’ve seen the power of communities for female developers through the organizations Caktus supports internationally and locally with mentors and sponsorship, including, amongst others, Girl Develop It RDU, PyLadies RDU, DjangoGirls, and Pearl Hacks. We’re looking forward to strengthening the resources for women in technology in Durham.

Our inaugural meeting is on Tuesday, May 26th at 6 pm. We will be discussing imposter syndrome, a name given for those unfortunate moments where one feels like an imposter, despite external evidence to the contrary. RSVP by joining our meetup group.

Caktus Group: Keynote by Catherine Bracy (PyCon 2015 Must-See Talk: 4/6)

Part four of six in our PyCon 2015 Must-See Series, a weekly highlight of talks our staff enjoyed at PyCon.

My recommendation would be Catherine Bracy's Keynote about Code for America. Cakti should be familiar with Code for America. Colin Copeland, Caktus CTO, is the founder of Code for Durham and many of us are members. Her talk made it clear how important this work is. She was funny, straight-talking, and inspirational. For a long time before I joined Caktus, I was a "hobbyist" programmer. I often had time to program, but wasn't sure what to build or make. Code for America is a great opportunity for people to contribute to something that will benefit all of us. I have joined Code for America and hope to contribute locally soon through Code for Durham.

Caktus Group: Q2 2015 ShipIt Day Recap

Last Friday everyone at Caktus set aside their regular client projects for our quarterly ShipIt Day, a chance for Caktus employees to take some time for personal development and independent projects. People work individually or in groups to flex their creativity, tackle interesting problems, or expand their personal knowledge. This quarter’s ShipIt Day saw everything from game development to Bokeh data visualization, Lego robots to superhero animation. Read more about the various projects from our Q2 2015 ShipIt Day.


Victor worked on our version of Ultimate Tic Tac Toe, a hit at PyCon 2015. He added in Jeff Bradbury’s artificial intelligence component. Now you can play against the computer! Victor also cleaned up the code and open sourced the project, now available here: github.com/caktus/ultimatetictactoe.

Philip dove into @total_ordering, a class decorator from Python's functools module that fills in missing rich-comparison methods on classes that define an ordering. Philip was curious why @total_ordering is necessary, and what the consequences of NOT using it might be. He discovered that though it is convenient when defining orderable classes, it is not as helpful as one would expect. In fact, rather than speeding things up, adding @total_ordering actually slows comparisons down. But, he concluded, you should still use it to cover certain edge cases.
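Philip's starting point is easy to reproduce. A minimal sketch (the `Version` class here is invented for illustration): with only `__eq__` and `__lt__` defined, `functools.total_ordering` derives the remaining comparison methods, at the cost of a little indirection on each call.

```python
# @total_ordering fills in __le__, __gt__, and __ge__ from the
# __eq__ and __lt__ you define yourself.
from functools import total_ordering

@total_ordering
class Version:
    def __init__(self, major, minor):
        self.major, self.minor = major, minor

    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)

    def __lt__(self, other):
        return (self.major, self.minor) < (other.major, other.minor)

# The derived methods work; each call goes through an extra layer of
# indirection, which is the slowdown Philip measured.
assert Version(1, 4) <= Version(2, 0)
assert Version(2, 1) > Version(2, 0)
```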

Karen updated our project template, the foundation for nearly all Caktus projects. The features she worked on will save us all a lot of time and daily annoyance. These included pulling the DB from deployed environments, refreshing the staging environment from production, and more.

Erin explored Bokeh, a Python interactive data visualization library. She initially learned about building visualizations without JavaScript during PyCon (check out the video she recommended by Sarah Bird). She used Bokeh and the Google API to display data points on a map of Africa for potential use in one of our social impact projects.

Jeff B worked on a Lisp implementation in Python. PyPy is written in a restricted version of Python (called RPython) and compiled down into highly efficient C or machine code. By implementing a toy version of Lisp on top of PyPy machinery, Jeff learned about how PyPy works.

Calvin and Colin built the beginnings of a live style guide into Caktus’ Django-project-template. The plan was loosely inspired by Mail Chimp's public style guide. They hope to eventually have a comprehensive guide of front-end elements to work with. Caktus will then be able to plug these elements in when building new client projects. This kind of design library should help things run smoothly between developers and the design team for front-end development.

Neil experimented with Mercury hoping the speed of the language would be a good addition to the Caktus toolkit. He then transitioned to building a project in Elm. He was able to develop some great looking hexagonal data visualizations. Most memorable was probably the final line of his presentation: “I was hoping to do more, but it turns out that teaching yourself a new programming language in six hours is really hard.” All Cakti developers nodded and smiled knowingly.

Caleb used Erlang and cowboy to build a small REST API. With more time, he hopes to provide a REST API that will provide geospatial searches for points of interest. This involves creating spatial indexes in Erlang’s built-in Mnesia database using geohashes.

Mark explored some of the issues raised in the Django-project-template and developed various fixes for them, including the way secrets are managed. Now anything that needs to be encrypted is encrypted with a public key generated when you bring up the SALT master. This fixes a very practical problem in the development workflow. He also developed a Django-project-template Heroku-style deploy, setting up a proof of concept project with a “git push” to deploy workflow.

Vinod took the time to read fellow developer Mark Lavin’s book Lightweight Django while I took up DRiVE by Daniel H. Pink to read about what motivates people to do good work or even complete rote tasks.

Scott worked with Dan to compare Salt states to Ansible playbooks. In addition, Dan took a look at Ember, working with the new framework as a potential for front-end app development. He built two simple apps, one for organizing albums in a playlist, and one for to-do lists. He had a lot of fun experimenting and working with the new framework.

Edward and Lucas built a minigame for our Epic Allies app. It was a fun, multi-slot, pinball machine game built with Unity3D.

Hunter built an HTML5 game using Phaser.js. Though he didn't have the time to make a fully fledged video game, he did develop a fun-looking board game with different characters, abilities, and animations.

NC developed several animations depicting running and jumping to be used to animate the superheroes in our Epic Allies app. She loved learning about human movement, how to create realistic animations, and outputting the files in ways that will be useful to the rest of the Epic Allies team.

Wray showed us an ongoing project of his: a front-end framework called sassless, “the smallest CSS framework available.” It consists of front-end elements that allow you to set up a page in fractions so that they stay in position when resizing a browser window (to a point) rather than the elements stacking. In other words, you can build a responsive layout with a very lightweight CSS framework.

One of the most entertaining projects of the day was the collaboration between Rebecca C and Rob, who programmed Lego-bots to dance in a synced routine using the Lego NXT software. Aside from it being a lot of fun to watch robots (and coworkers) dance, the presence of programmable Lego-bots prompted a much welcome visit from Calvin's son Caelan, who at age 9 is already learning to code!

Caktus Group: Interactive Data for the Web by Sarah Bird (PyCon 2015 Must-See Talk: 3/6)

Part three of six in our PyCon 2015 Must-See Series, a weekly highlight of talks our staff enjoyed at PyCon.

Sarah Bird's talk made me excited to try the Bokeh tutorials. The Bokeh library has very approachable methods for creating data visualizations inside of Canvas elements all via Python. No JavaScript necessary. Who should see this talk? Python developers who want to add a beautiful data visualization to their websites without writing any JavaScript. Also, Django developers who would like to use QuerySets to create data visualizations should watch the entire video, and then rewind to minute 8:50 for instructions on how to use Django QuerySets with a couple of lines of code.

After the talk, I wanted to build my own data visualization map of the world with plot points for one of my current Caktus projects. I followed up with one of the friendly developers from Continuum Analytics to find out that you do not need to spin up a separate Bokeh server to get your data visualizations running via Bokeh.

Astro Code School: Fall Registration Now Open

Registration for the fall Python & Django Web Engineering class is open. You can fill out the application form on the Apply page and get more details on the application Process page. The deadline for applying is August 24, 2015. You can find a full syllabus for this class over on its page, BE102.

This class is twelve weeks long and full time, Monday through Friday from 9 AM – 5 PM. It'll be taught here at the Astro Code School at 108 Morris Street, Suite 1b, Durham, NC.

Python and Django make a powerful team to build maintainable web applications quickly. When you take this course you will build your own web application during lab time with assistance from your teacher and professional Django developers. You’ll also receive help preparing your portfolio and resume to find a job using the skills you’ve learned.

Please contact me if you have any questions.

Caktus Group: Cakti Comment on Django's Class-based Views

After PyCon 2015, we were surprised to realize how many of the Cakti who attended had been asked about Django's class-based views (CBVs). We talked about why this might be, and this is a summary of what we came up with.

Lead Front-End Developer Calvin Spealman has noticed that there are many more tutorials on how to use CBVs than on how to decide whether to use them.

Astro Code School Lead Instructor Caleb Smith reminded us that while "less code" is sometimes given as an advantage of using CBVs, it really depends on what you're doing. Each case is different.

I pointed out that there seem to be some common misconceptions about CBVs.

Misconception: Functional views are deprecated and we're all supposed to be writing class-based views now.

Fact: Functional views are fully supported and not going anywhere. In many cases, they're a good choice.

Misconception: CBVs means using the generic class-based views that Django provides.

Fact: You can use as much or as little of Django's generic views as you like, and still be using class-based views. I like Vanilla Views as a simpler, easier-to-understand alternative to Django's generic views that still gives all the advantages of class-based views.

So, when to use class-based views? We decided the most common reason is if you want to reuse code across views. This is common, for example, when building APIs.

Caktus Technical Director Mark Lavin has a simple answer: "I default to writing functions and refactor to classes when needed writing Python. That doesn't change just because it's a Django view."

On the other hand, Developer Rebecca Muraya and I tend to just start with CBVs, since if the view will ever need to be refactored that will be a lot easier if it was split up into smaller bits from the beginning. And so many views fall into the standard patterns of Browse, Read, Edit, Add, and Delete that you can often implement them very quickly by taking advantage of a library of common CBVs. But I'll fall back to Mark's system of starting with a functional view when I'm building something that has pretty unique behavior.
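The reuse argument can be sketched without Django at all. The snippet below is plain Python, not Django's actual view API, and every name in it is invented for illustration: shared behavior lives in a base class, and each "view" overrides only what differs.

```python
# A minimal, Django-free sketch of why class-based views aid reuse:
# the serialization/response logic is written once in a base class,
# and each concrete view supplies only its data.
import json

class JSONView:
    """Base 'view': fetch data, serialize it, wrap it in a response."""

    def get_data(self):
        raise NotImplementedError

    def render(self):
        return {"content_type": "application/json",
                "body": json.dumps(self.get_data())}

class AuthorListView(JSONView):
    def get_data(self):
        return ["Mark Lavin", "Rebecca Muraya"]

class BookListView(JSONView):
    def get_data(self):
        return ["Lightweight Django"]

# Each subclass reuses the response logic unchanged -- the same split
# that makes later refactoring of a real CBV easier.
assert json.loads(AuthorListView().render()["body"]) == ["Mark Lavin", "Rebecca Muraya"]
```

With function-based views, the equivalent sharing usually means extracting helper functions; the class version simply makes the seams explicit from the start.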

Tim Hopper: How I Became a Data Scientist Despite Having Been a Math Major

Caution: the following post is laden with qualitative extrapolation of anecdotes and impressions. Perhaps ironically (though perhaps not), it is not a data-driven approach to measuring the efficacy of math majors as data scientists. If you have a differing opinion, I would greatly appreciate your carefully articulating it and sharing it with the world.

I recently started my third "real" job since finishing school; at my first and third jobs I have been a "data scientist". I was a math major in college (and pretty good at it) and spent a year in the math Ph.D. program at the University of Virginia (and performed well there as well). These two facts alone would not have equipped me for a career in data science. In fact, it remains unclear to me that those two facts alone would have prepared me for any career (with the possible exception of teaching) without significantly more training.

When I was in college Business Week published an article declaring "There has never been a better time to be a mathematician." At the time, I saw an enormous disconnect between the piece and what I was being taught in math classes (and thus what I considered to be a "mathematician"). I have come across other pieces lauding this as the age of the mathematicians, and more often than not, I've wondered if the author knew what students actually studied in math departments.

The math courses I had as an undergraduate were:

  • Linear algebra
  • Discrete math
  • Differential equations (ODEs and numerical)
  • Theory of statistics 1
  • Numerical analysis 1 (numerical linear algebra) and 2 (quadrature)
  • Abstract algebra
  • Number theory
  • Real analysis
  • Complex analysis
  • Intermediate analysis (point set topology)

My program also required a one-semester intro to C++ and two semesters of freshman physics. In my year as a math Ph.D. student, I took analysis, algebra, and topology classes; had I stayed in the program, my future coursework would have been similar: pure math where homework problems consisted almost exclusively of proofs done with pen and paper (or in LaTeX).

Though my current position occasionally requires mathematical proof, I suspect that is rare among data scientists. While the "data science" demarcation problem is challenging (and I will not seek to solve it here), it seems evident that my curriculum lacked preparation in many essential areas of data science. Chief among these are programming skill, knowledge of experimental statistics, and experience with math modeling.

Few would argue that programming ability is not a key skill of data science. As Drew Conway has argued, a data scientist need not have a degree in computer science, but "Being able to manipulate text files at the command-line, understanding vectorized operations, thinking algorithmically; these are the hacking skills that make for a successful data hacker." Many of my undergrad peers, having briefly seen C++ freshman year and occasionally used Mathematica to solve ODEs for homework assignments, would have been unaware that manipulation of a file from the command-line was even possible, much less have been able to write a simple sed script; my grad school classmates were little different.

Many data science positions require even more than the ability to solve problems with code. As Trey Causey has recently explained, many positions require understanding of software engineering skills and tools such as writing reusable code, using version control, software testing, and logging. Though I gained a fair bit of programming skill in college, these skills, now essential in my daily work, remained foreign to me until years later.

My math training lacked statistics courses. Though my brief exposure to mathematical statistics has been valuable in picking up machine learning, experimental statistics was missing altogether. Many data science teams are interested in questions of causal inference and design and analysis of experiments; some would make these essential skills for a data scientist. I learned nothing about these topics in math departments. Moreover, machine learning, also a cornerstone of data science, is not a subject I could have even defined until after I was finished with my math coursework; at the end of college, I would have said artificial intelligence was mostly about rule-based systems in Lisp and Prolog.

Yet even if statistics had played a more prominent role in my coursework, those who have studied statistics know there is often a gulf between understanding textbook statistics and being able to effectively apply statistical models and methods to real world problems. This is only an aspect of a bigger issue: mathematical (including statistical) modeling is an extraordinarily challenging problem, but instruction on effectively modeling real world problems is absent from many math programs. To this day, defining my problem in mathematical terms is one of the hardest problems I face; I am certain that I am not alone in this. Though I am now armed with a wide variety of mathematical models, it is rarely clear exactly which model can or should be applied in a given situation.

I suspect that many people, even technical people, are uncertain as to what academic math is beyond undergraduate calculus. Mathematicians mostly work in the logical manipulation of abstractly defined structures. These structures rarely bear any necessary relationship to physical entities or data sets outside the abstractly defined domain of discourse. Though some might argue I am speaking only of "pure" mathematics, this is often true of what is formally known as "applied mathematics". John D. Cook has made similar observations about the limitations of pure and applied math (as proper disciplines) in dubbing himself a "very applied mathematician". Very applied mathematics is "an interest in the grubby work required to see the math actually used and a willingness to carry it out. This involves not just math but also computing, consulting, managing, marketing, etc." These skills are conspicuously absent from most math curricula I am familiar with.

Given this description of how my schooling left me woefully unprepared for a career in data science, one might ask how I have had two jobs with that title. I can think of several (though probably not all) reasons.

First, the academic study of mathematics provides much of the theoretical underpinnings of data science. Mathematics underlies the study of machine learning, statistics, optimization, data structures, analysis of algorithms, computer architecture, and other important aspects of data science. Knowledge of mathematics (potentially) allows the learner to more quickly grasp each of these fields. For example, learning how principal component analysis—a math model that can be applied and interpreted by someone without formal mathematical training—works will be significantly easier for someone with earlier exposure to linear algebra. On a meta-level, training in mathematics forces students to think carefully and solve hard problems; these skills are valuable in many fields, including data science.
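To make the PCA point concrete, here is a minimal sketch in NumPy with illustrative random data (not from any real project). Anyone who has seen the singular value decomposition in a linear algebra course will recognize immediately why each step works.

```python
# PCA via the SVD: center the data, factor it, and read the principal
# directions off the right singular vectors. Illustrative data only.
import numpy as np

rng = np.random.default_rng(0)
# 200 correlated 2-D points
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

Xc = X - X.mean(axis=0)                  # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt                          # rows are principal directions
explained_variance = s**2 / (len(X) - 1)

# The first component captures the most variance, by construction
# (singular values come back sorted in decreasing order).
assert explained_variance[0] >= explained_variance[1]
projected = Xc @ components[0]           # coordinates along component 1
```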

My second reason is connected to the first: I unwittingly took a number of courses that later played important roles in my data science toolkit. For example, my current work in Bayesian inference has been made possible by my knowledge of linear algebra, numerical analysis, stochastic processes, measure theory, and mathematical statistics.

Third, I did a minor in computer science as an undergraduate. That provided a solid foundation for me when I decided to get serious about building programming skill in 2010. Though my academic exposure to computer science lacked any software engineering skills, I left college with a solid grasp of basic data structures, analysis of algorithms, complexity theory, and a handful of programming languages.

Fourth, I did a master's degree in operations research (after my year as a math PhD student convinced me pure math wasn't for me). This provided me with experience in math modeling, a broad knowledge of mathematical optimization (central to machine learning), and the opportunity to take graduate-level machine learning classes.1

Fifth, my insatiable curiosity in computers and problem solving has played a key role in my career success. Eager to learn something about computer programming, I taught myself PHP and SQL as a high school student (to make Tolkien fan sites, incidentally). Having been given small Mathematica-based homework assignments in freshman differential equations, I bought and read a book on programming Mathematica. Throughout college and grad school, I often tried—and sometimes succeeded—to write programs to solve homework problems that professors expected to be solved by hand. This curiosity has proven valuable time and time again as I've been required to learn new skills and solve technical problems of all varieties. I'm comfortable jumping in to solve a new problem at work, because I've been doing that on my own time for fifteen years.

Sixth, I have been fortunate enough to have employers who have patiently taught me and given me the freedom to learn on my own. I have learned an enormous amount in my two-and-a-half-year professional career, and I don't anticipate slowing down any time soon. As Mat Kelcey has said: always be sure you're not the smartest one in the room. I am very thankful for three jobs where I've been surrounded by smart people who have taught me a lot, and for supervisors who trust me enough to let me learn on my own.

Finally,4 it would be hard for me to overvalue the four and a half years of participation in the data science community on Twitter. Through Twitter, I have the ear of some of data science's brightest minds (most of whom I've never met in person), and I've built a peer network that has helped me find my current and last job. However, I mostly want to emphasize the pedagogical value of Twitter. Every day, I'm updated on the release of new software tools for data science, the best new blog posts for our field, and the musings of some of my data science heroes. Of course, I don't read every blog post or learn every software tool. But Twitter helps me to recognize which posts are most worth my time, and because of Twitter, I know something instead of nothing about Theano, Scalding, and dplyr.2

I don't know to what extent my experience generalizes3, in either the limitations of my education or my analysis of my success, but I am obviously not going to let that stop me from drawing some general conclusions.

For those hiring data scientists, recognize that mathematics as taught might not be the same mathematics you need from your team. Plenty of people with PhDs in mathematics would be unable to define linear regression or Bloom filters. At the same time, recognize that math majors are taught to think well and solve hard problems; these skills shouldn't be undervalued. Math majors are also experienced in reading and learning math! They may be able to read academic papers and understand difficult (even if new) mathematics more quickly than a computer scientist or social scientist. Given enough practice and training, they would probably be excellent programmers.

For those studying math, recognize that the field you love, in its formal sense, may be keeping you away from enjoyable and lucrative careers. Most of your math professors have spent their adult lives solving math problems on paper or on a chalkboard. They are inexperienced in and, possibly, unknowledgeable about very applied mathematics. A successful career in pure mathematics will be very hard and will require you to be very good. While there seem to be lots of jobs in teaching, they will rarely pay well. If you're still a student, you have a great opportunity to take control of your career path. Consider taking computer science classes (e.g. data structures, algorithms, software engineering, machine learning) and statistics classes (e.g. experimental design, data analysis, data mining). For both students and graduates, recognize that your math knowledge becomes very marketable when combined with skills such as programming and machine learning; there are a wealth of good books, MOOCs, and blog posts that can help you learn these things. Moreover, the barrier to entry for getting started with production quality tools has never been lower. Don't let your coursework be the extent of your education. There is so much more to learn!5


  1. At the same time, my academic training in operations research failed me, in some aspects, for a successful career in operations research. For example, practical math modeling was not sufficiently emphasized and the skills of computer programming and software development were undervalued. 

  2. I have successfully answered more than one interview question by regurgitating knowledge gleaned from tweets. 

  3. Among other reasons, I didn't really plan to get where I am today. I changed majors no fewer than three times in college (physics, CS, and math) and essentially dropped out of two PhD programs! 

  4. Of course, I have plenty of data science skills left to learn. My knowledge of experimental design is still pretty fuzzy. I still struggle with effective mathematical modeling. I haven't deployed a large scale machine learning system to production. I suck at software logging. I have no idea how deep learning works. 

  5. For example, install Anaconda and start playing with some of these IPython notebooks. 

Tim Hopper: Publishing a Static Site Generator from iOS

A few weeks ago, I set up Travis CI so this Pelican-based blog will publish itself when I commit a new post to GitHub.

At the time, I asked on Twitter if there were any good Git clients that would allow me to push new posts from my iPad; I didn't get any promising replies.

However, I just found out about an app called Working Copy, "a powerful Git client for iOS 8 that clones, edits, commits, pushes, and more."

I just cloned my Stigler Diet repo on my iPad, and I'm composing this post from the Whole Foods cafe on my iPad. If you're reading this post, it's because I successfully published it from here!

Astro Code School: Video - Tips for Using Generators in Python

Here's the third screencast video in Caleb Smith's series about functional programming in Python. This one describes generators, iterators and iterables in Python with some tips on how to implement generators.
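As a small taste of the topic (not a substitute for the screencast), here is the core distinction between a generator and a materialized sequence:

```python
# A generator function returns a lazy iterator; nothing executes
# until you iterate over it.
def countdown(n):
    while n > 0:
        yield n
        n -= 1

gen = countdown(3)           # no work done yet -- gen is an iterator
assert next(gen) == 3        # runs the body up to the first yield
assert list(gen) == [2, 1]   # exhausts the rest

# Generator expressions give the same laziness inline:
squares = (x * x for x in range(4))
assert sum(squares) == 14    # 0 + 1 + 4 + 9
```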

Don't forget to subscribe to the Astro Code School YouTube channel. Lots more educational screencasts to come.

Caktus Group: Beyond PEP 8 by Raymond Hettinger (PyCon 2015 Must-See Talk: 2/6)

Part two of six in our PyCon 2015 Must-See Series, a weekly highlight of talks our staff enjoyed at PyCon.

I think everyone who codes in any language and uses any automated PEP 8 checker or linter should watch this talk. Unfortunately, going into any detail on what I learned (or really was reminded of) would ruin the effect of actually watching the talk, so I'd simply encourage everyone to watch it. I came away from the talk wanting to figure out a way to incorporate its lesson into our Caktus development practices.

Frank Wierzbicki: Jython 2.7.0 final released!

On behalf of the Jython development team, I'm pleased to announce that the final release of Jython 2.7.0 is available! It's been a long road to get to 2.7, and it's finally here! I'd like to thank Amobee for sponsoring my work on Jython. I'd also like to thank the many contributors to Jython for everything from bug reports and patches to pull requests, documentation changes, support emails, and fantastic conversation on Freenode at #jython.

Along with language and runtime compatibility with CPython 2.7.0, Jython 2.7 provides substantial support of the Python ecosystem. This includes built-in support of pip/setuptools (which you can use with bin/pip) and a native launcher for Windows (bin/jython.exe), with the implication that you can finally install Jython scripts on Windows.

Jim Baker presented a talk at PyCon 2015 about Jython 2.7, including demos of new features.

Please see the NEWS file for detailed release notes. This release of Jython requires JDK 7 or above.

This release is being hosted at Maven Central; there are three main distributions. To see all of the files available, including checksums, go here and navigate to the appropriate distribution and version.

Astro Code School: Video - Implementing Decorators in Python

This screencast provides some insights into implementing decorators in Python using functional programming concepts and demonstrates some instances where decorators can be useful.

In the video, I reference the blog post Python Decorators in 12 Steps by Simeon Franklin for further reading.
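As a small taste of what the screencast covers, here is a minimal decorator (the `log_calls` name is invented for illustration), using `functools.wraps` to preserve the wrapped function's metadata:

```python
# A decorator is just a callable that takes a function and returns
# a replacement for it.
import functools

def log_calls(func):
    @functools.wraps(func)  # keeps func's __name__ and __doc__
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        wrapper.calls.append((args, result))
        return result
    wrapper.calls = []
    return wrapper

@log_calls
def add(a, b):
    """Add two numbers."""
    return a + b

assert add(2, 3) == 5
assert add.calls == [((2, 3), 5)]
assert add.__name__ == "add"     # thanks to functools.wraps
```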

Caktus Group: Caktus Wins Two Communicator Awards for PyCon 2015

We’re thrilled to announce that we’ve won two awards in the 2015 Communicator Awards competition. With over 6,000 entries received from across the US and around the world, the Communicator Awards is considered the largest and most competitive international awards program honoring creative excellence for communications professionals.

Caktus Group was honored with the Gold Award for Excellence in Event Website and Silver Award for Distinction for Visual Appeal for the PyCon 2015 site. Both awards recognize the work of designer Trevor Ray, developers David Ray and Rebecca Muraya, and project manager Ben Riseling.

Of course, we’re excited for our work to be recognized, but these awards also represent an opportunity for PyCon to receive well-deserved recognition, especially for the hard work of the event’s organizers. With the 2015 Communicator Awards, they have been placed in the company of such large brands as the Canadian Olympic Team, Frito-Lay, Lexus, and Red Hat.

You can learn more about the origins of the site’s design and the design process for Trevor’s graphic design by listening to his lightning talk “Reimagining PyCon 2015”.

Caktus Group: AIGA Durham Studio Tour Recap

This was the first year Caktus Group participated in the AIGA studio tour and the turnout was amazing. From 5:30 PM till the 9:00 PM close, we had visitors ranging from students to tenured professionals in the design and web development fields sharing stories and touring the newly renovated Caktus Group office. Members from the Caktus design, development, and management teams were present to field questions, give tours, and show select works from the past year.

From the Epic Allies team, visitors got to see a preview of the app’s mini games and designs. Epic Allies is an app that seeks to gamify the process of taking HIV medication. The goal is to help HIV-positive individuals develop and maintain positive habits around taking their medication and making other healthy life choices. The Epic Allies project has been in progress since 2012 and it’s been great to see it evolve.

Visitors were also able to view and explore the 2015 PyCon website. The design and development of the website were completed by Caktus Group in early 2015. Elements of the design were then used throughout the PyCon conference venue in Montreal. The bright winding forms of the design worked well on screen, but they really enveloped the venue and tied everything together. It was a fantastic project made possible by the hard work of many Caktus staff and the conference organizers Ewa Jodlowska and Diana Clarke, who were great to work with.

Finally, there was a behind-the-scenes video of the Caktus Group reception sign installation and the original install template. The video was shot and edited by Caktus’ Wray Bowling and showed the start to finish process of installing the reception sign that was beautifully crafted by Jim at ArtCraft Sign Company - Thanks, Jim. Having missed the actual installation of the sign, I’m glad Wray captured the process.

By the time 9 PM rolled around, a lot of work was viewed, beers were drunk, and information was shared with new friends. If you didn’t make it out for this year’s AIGA studio tour, don’t be sad. You can still make it out next year. There are a lot of talented people in the Triangle and with so many open studio doors you’re bound to run into more than a few of them.

Caktus GroupMarketplace Radio Highlights How Service Info App Helps 1.5 Million Syrian Refugees

Image Courtesy of UK Department for International Development [CC BY 2.0], via Wikimedia Commons

Recently, one of our projects, Service Info, received national attention thanks to a Marketplace interview. American Public Media’s Kai Ryssdal spoke with International Rescue Committee CEO David Miliband about how Service Info is helping 1.5 million refugees of the Syrian conflict in Lebanon. The Syrian conflict is one of the worst ongoing humanitarian crises, accounting for the majority of the world’s refugees.

“We don’t just need to do more in the Syria crisis, but we’ve got to do things differently,” said Miliband. “The refugees from Syria are educated people, they’re tech savvy people.”

Enter Service Info, a platform developed by Caktus in conjunction with the IRC and the United States government to provide a mobile means for refugees to report on, rate, and find the services available to them. Until now, displaced persons have too often been one among millions, adrift without the means to inform themselves or act in their own care. The Service Info platform acts as a reliable source of information, telling individuals where they can redeem vouchers for goods and aid services, for instance, or where their children can attend school. More significantly, the platform enables users to comment on these services. That feedback will in turn improve the quality of the services themselves.

“Until now, there’s been no proper tech platform for [refugees] to find out what services are available to them,” said Miliband.

Service Info is revolutionary in providing just such a platform. Once the system has been in use on the ground for a certain length of time, Caktus and the IRC hope to increase the reach of Service Info by open sourcing the app. Making the source code freely available enables others to use, improve upon, and replicate the platform. Agencies working in conflict zones and natural disasters would be able to use it to support displaced persons.

Listen to the complete interview to learn more about the excellent work the International Rescue Committee is doing in response to the world’s most challenging crises.

Caktus GroupPyCon 2015 Talks: Our Must See Picks (1/6)

Whether you couldn’t make it to PyCon this year, were busy attending one of the other amazing talks, or were simply too enthralled by the always popular “hallway track”, there are bound to be talks you missed. Thankfully, the PyCon staff does an amazing job not only of organizing the conference for attendees, but also of producing recordings of all the talks for anyone who couldn’t attend. Even if you attended, you couldn’t have seen every talk, so these recordings are a great safety net.

Because there are so many of them, I asked those who attended for suggestions. We will share our six favorites, one a week, over the next few weeks. Take some time to watch and learn from these talented speakers, recommended by Caktus staff who can’t stop talking about the great time they had in Montreal.

Keynote by Jacob Kaplan-Moss

Suggested by Technical Director Mark Lavin

"Jacob's keynote on Sunday was amazing. He really breaks down the myth of the 10x programmer and why it hurts the tech community. Everyone should watch it. I came away from this talk thinking about how we could improve our hiring and review process to ensure we aren't falling in the traps set by this myth. He's an amazing speaker and leader for our community."

Caktus GroupWhy did Caktus Group start Astro Code School?

Our Astro Code School is now officially accepting applications to its twelve-week Python & Django Web Development class for intermediate programmers! To kick off Astro’s opening, we asked Caktus’ CTO and co-founder Colin Copeland, who recently won a 2015 Triangle Business Journal 40 Under 40 Leadership Award, and Astro’s Director Brian Russell to reflect on the development of Astro as well as the role they see the school playing in the Django community.


Why open the East Coast’s first Django and Python-focused code school?

Colin: Technology is an important part of economic growth in the Triangle area and we wanted to make sure those opportunities reached as many residents as possible. We saw that there were no East Coast formal adult training programs for Django or Python, our specialities. We have experience in this area, having hosted successful Django boot camps and private corporate trainings. Opening a code school was a way to consolidate the training side of Caktus’ business while also giving back to the Triangle-area community by creating a training center to help those looking to learn new skills.

Brian: Ultimately, Caktus noticed a need for developers and the lack of a central place to train them. The web framework Django is written in Python, and Python is a great language for beginning coders. Python is the top learning language at the nation’s best universities. Those are the skills prominent here at Caktus. It was an opportunity to train more people and prepare them for the growing technology industry at firms like Caktus.

How has demand for Django-based web applications changed since Caktus first began?

Colin: It has increased significantly. We only do Django development now; we weren’t specialized in that way when we first started. The sheer number of inbound sales requests is much higher than before. More people are aware of Django, conferences are bigger. Most significantly, it has an ever-growing reputation as a more professional, stable, and maintainable framework than the alternatives.

How does Astro, then, fit into this growth timeline?

Colin: It’s a pretty simple supply and demand ratio. Astro comes out of a desire to add more developers to the field and meet a growing demand for Django coders. The Bureau of Labor Statistics projects a 20% growth in demand for web developers by 2020. It is not practical to wait for today’s college, high school, or even middle-school students to become developers. Many great software developers are adults coming from second or third careers. Our staff certainly reflects this truth. Astro provides one means for talented adults to move into the growing technology industry.

Where do you see Astro fitting in to the local Python and Django community? For instance, how do you envision Astro’s relationship to some of the groups Caktus maintains a strong relationship with, such as Girl Develop It or TriPython?

Colin: Astro’s goals clearly align with those of Girl Develop It in terms of training and support. And the space will be a great place to host events for local groups and classes.

Brian: Yeah, I see it as a very natural fit. We hope to help those organizations by sponsoring meetups, hosting events, and providing free community programs and workshops. And there is the obvious hope that folks from those groups will enroll as students at Astro. I think it’s also important to note that Chris Calloway, one of the lead organizers for TriPython, is a member of the Astro advisory committee. There is a natural friendship with that community.

How do you hope Astro will change and add to Durham’s existing technical community?

Brian: In general there are a lot of students with training from Astro who will be able to bring their skills to local businesses, schools, non-profits—all sorts of organizations. For me, computer programming is like reading, writing, and arithmetic: it should be a part of core curriculum for students these days. It helps people improve their own net worth and contribute to the local economy. Astro is all about workforce development and improving technical literacy: two things that help entrepreneurs and entrepreneurial enterprises.

What are some of the main goals for Astro in its first year?

Brian: I want to help people find better, higher paying jobs by obtaining skills that are usable in the current economy through our 12-week classes. I’m personally interested in social economic justice and one way to achieve that is by being highly skilled. Training helps people better themselves no matter what kind of education it is. In the 21st century, computer programming education is one of the most powerful tools for job preparedness and improvement.

Colin: I would love to follow alumni who make it through the classes and see how their skills help them in their careers.

A huge amount of work has gone into getting Astro licensed with the North Carolina Community College Board. A lot of code schools are not licensed. Why was this an important step for Astro?

Brian: Mainly because we wanted to demonstrate to potential students and the public at large that we’ve done our due diligence, that other groups and professionals have vetted us and qualified us as prepared to serve. Ultimately we are licensed in order to protect consumers. Not just licensed—we’re bonded, licensed, and insured. And this is an ongoing guarantee to our students. We will be audited annually for six years. I see it as a promise for continuous and ongoing protection, betterment, and improvement.

So, who would you describe as the ideal student for an Astro course?

Brian: A lot of students. Any. All different kinds. But, more specifically? I would recommend it to folks changing their career. Or people who graduated from high school, but for one reason or another are not able to go on to higher education. Astro classes will be excellent for job preparedness and training, so they’ll suit anyone looking to market themselves in the current economy.

Additionally, anyone fine tuning their career after college or even after grad school. Coding and learning to code is an excellent way to earn money to pay for school without getting into debt. Astro is in no way a replacement for higher ed, but coding classes can augment a well-rounded education. Successful people have a diverse education. And learning to code enables people to align their toolkits for the modern job market.


To learn more about Astro, meet Colin and Brian in person, and celebrate the opening of Astro Code School, be sure to stop by the school’s Launch Party on Friday, May 1st from 6:00 to 9:00 pm. Registration is required.

Astro Code SchoolVideo - Functional Programming in Python

In this video our Lead Instructor Caleb Smith presents basic functional programming concepts and how to apply them in Python. Check back later for more screencasts here and on the new Astro YouTube channel.

Astro Code SchoolIntro to Django by PyLadies RDU

PyLadies RDU will be offering a free four-hour workshop on Django, taught by Caktus Django developer Rebecca Conley, here at Astro Code School on Saturday, May 30, 2015 from 4pm to 8pm. For more information and to RSVP, please join the PyLadies RDU meetup group.

Caktus GroupQ1 2015 Charitable Giving

Though our projects often have us addressing issues around the globe, we like to turn our focus to the local level once a quarter with our charitable giving program. Each quarter we ask our employees to suggest charities and organizations that they are involved in or that have had a substantive influence on their lives. It’s our way of supporting not only our own employees, but the wider community in which we live and work. This quarter we are pleased to be sending contributions to the following organizations:

The Scrap Exchange

http://scrapexchange.org
The Scrap Exchange is a nonprofit creative reuse center in Durham, North Carolina whose mission is to promote creativity and environmental awareness. The Scrap Exchange provides a sustainable supply of high-quality, low-cost materials for artists, educators, parents, and other creative people. This is the second time staff nominated this organization.

Durham County Library

http://durhamcountylibrary.org/
The Durham County Library provides extensive library services, including book, DVD, audiobook, and A/V equipment rentals. They also provide computer services, internet access, meeting and study rooms on site, as well as a bookmobile and Older Adult and Shut-In Services for those unable to visit the library. Aside from the library’s service towards the community, their archives were incredibly helpful in the restoration of the building at 108 Morris St where our office is now located. Caktus is particularly thankful for the work of Lynn Richardson, Local History Librarian of the North Carolina Collection, for her invaluable help in the restoration process.

Preservation Durham

http://preservationdurham.org/
Preservation Durham’s mission is to protect Durham’s historic assets through action, advocacy, and education. They provide home tours, walking tours, and virtual tours of Durham. They also advocate for historic places in peril and provide informative workshops for those interested in preserving and restoring historical sites. Their workshops were vital in the restoration of our historic office building in downtown Durham.

Durham Bike Co-Op

http://www.durhambikecoop.org/
The Durham Bike Co-op is an all-volunteer, nonprofit, community bike project whose programming includes hands-on repair skill share, the earn-a-bike program, various mobile bike clinics, and community ride events. They help people build, repair, maintain and learn about bicycles and bicycle commuting. Their community-oriented vision and shared labor practices are definitively Durham.

Diaper Bank of North Carolina

http://ncdiaperbank.org/
Safety net programs such as food stamps and WIC do not cover diapers. And a healthy supply of diapers can fall out of the financial reach of many using these programs. The Diaper Bank of North Carolina provides diapers to families in need. The organization makes it easy to get involved—in fact, Caktus leadership volunteered not too long ago—and it addresses a critical need in the fight against poverty in the Triangle.

Frank WierzbickiJython 2.7 release candidate 3 available!

On behalf of the Jython development team, I'm pleased to announce that the third release candidate of Jython 2.7 is available! I'd like to thank Amobee for sponsoring my work on Jython. I'd also like to thank the many contributors to Jython.

Please see the NEWS file for detailed release notes. This release of Jython requires JDK 7 or above.

This release is being hosted at maven central. There are three main distributions. In order of popularity:
To see all of the files available including checksums, go here and navigate to the appropriate distribution and version.

Caktus GroupCaktus Group's Colin Copeland Recognized Among TBJ’s 40 Under 40

Caktus co-founder and Chief Technology Officer, Colin Copeland, is among an outstanding group of top business leaders to receive the Triangle Business Journal’s 2015 40 Under 40 Leadership Award. The award recognizes individuals for their remarkable contributions to their organizations and to the community.

Colin was one of the co-founders of Caktus, started in 2007 around a second-hand Chapel Hill dining room table. Now, Caktus is the nation’s largest custom web and mobile software firm specializing in Django, an open source web framework. Caktus has built over 100 solutions that have reached more than 4 million lives. Clutch.io, a research firm, lists Caktus as one of the nation’s top web development firms. As a direct result of Colin’s guidance and vision, Caktus has built technology that not only helps business clients, but has addressed some of the most difficult global challenges facing us today: humanitarian aid for war refugees, HIV/AIDS, and open access to democracy, among others.

Colin also served as UNICEF’s community coordinator for RapidSMS, a platform to build technology for developing nations quickly and freely. He used his experience as part of the Django open source community to lay the foundations of a global network of developers working towards improving the world. RapidSMS projects, featured by the BBC, Time Magazine, Fast Company, and others, have reached untold millions in the effort to improve daily lives.

Colin, a Durham resident, is passionate about improving his local community. He used his community-building skills and keen technical expertise to found Code for Durham, a volunteer group dedicated to improving civic engagement by building free technology tools. The group includes software developers, designers, civic activists, policy experts, and government employees. Colin, along with key Code for Durham members, successfully lobbied for increased Durham government transparency via a new Open Data Manager position. The group is working on web applications to help with school navigation, homelessness, bike crash locations, and more.

In keeping with the spirit of supporting his local Durham community, Colin led the historic restoration of Caktus’ new headquarters in downtown Durham. He ensured renovations included a community meeting space that could support local technology groups such as TriPython, Girl Develop It RDU, and PyLadies RDU. He is also a member of Durham’s Rotary Club.

A strong advocate for the power of technology to change lives, Colin led the founding of Caktus’ Astro Code School. Astro provides full-time software development education for adults in an inclusive environment, and will increase access to the Triangle’s growing technology industry.

Colin will be honored at the 40 Under 40 Leadership Awards Gala on June 11th at the Cary Prestonwood Country Club. The Triangle Business Journal will also profile him in a special section of their June 12th print edition.

Caktus GroupPyCon 2015 Recap

The best part of PyCon? Definitely the people. This is my fifth PyCon, so I’ve had a chance to see the event evolve, especially with the fantastic leadership of Ewa Jodlowska and Diana Clarke. We were also lucky enough to work with them on the PyCon 2015 website. This year we were once again located in the Centre-Ville section of Montreal, close to lots of great restaurants and entertainment.

Mark Lavin, David Ray, and Caleb Smith arrived before the official start of the conference to host a workshop on “Building SMS Applications with Django.” As avid users of RapidSMS for many of our projects, including UNICEF’s Project Mwana and the world’s first SMS voter registration app for Libya, it was a great experience to share our knowledge.

We also had a chance to work with future Django developers through the DjangoGirls Workshop this year. Karen Tracey, David Ray, and Mark Lavin served as mentors to help the mentees build their first Django app. It was wonderful to watch new programmers develop their first apps and we are looking forward to participating in similar events in the future.

The conference kicked off Thursday night with a reception where we debuted a game we built during one of our ShipIt Days. Our Caktus-designed “Ultimate Tic Tac Toe” was a huge hit!

Also on Thursday, the O'Reilly booth held a book signing for Mark Lavin’s Lightweight Django that he coauthored with Julia Elman. An impressively long line of people showed up for the event. Luckily, Mark’s around the office enough that we can get him to sign all sorts of books for us.

Look at all those people!

Friday and Saturday the trade booth show was in full swing. At the Caktus booth, people continued to line up to play “Ultimate Tic Tac Toe” and we gave away five copies of Mark’s book, Lightweight Django, as well as three quadcopters. We were sad to see the quadcopters leave the office but hope that the new recipients enjoy playing with them as much as we did.

We also had some visits from our PyCon 2015 ticket giveaway winners. We gave tickets to the Python community at large and to our local community groups here in North Carolina, including TriPython, Girl Develop It RDU, and PyLadies RDU.

Duckling, an app we developed to make it easier to find and join casual outings at conferences, was also in full use this year at PyCon. We brought along the app’s mascot Quacktus. He even had his own Twitter handle this year to give a bird’s eye view of PyCon happenings. It was great to once again use the app to meet new people and catch up with old friends while exploring Montreal.

On the last night of PyCon, PyLadies held their charity auction and Caktus donated a framed collage of Trevor Ray’s preliminary artwork and sketches that went into his redesign of the PyCon 2015 website. We were very honored that it sold for $1,000 (the item with the second-highest bid, second only to Disney’s artwork) and are glad we can provide support to all of the awesome work PyLadies does for the community.

PyCon was, as always, a terrific time for us and we can’t wait until 2016. See you in Portland!

Caktus GroupNow Launching: Astro Code School for Django and Python Education

Since moving to Durham in Fall 2014, we've been busy here at Caktus. We just finished renovating the first floor of our headquarters to bring the Triangle's (and East Coast's!) first Django and Python code school, Astro Code School. We're proud to say that the school is now officially open and we'll be celebrating with a public launch party on May 1st.

I spoke with Colin Copeland, Caktus co-founder and Chief Technology Officer, about why Astro matters to Caktus and our region here in North Carolina: "The Triangle has seen an influx of people relocating here to be a part of a thriving technology sector. However, as business leaders, we have a responsibility to make sure innovation in the Triangle doesn’t leave people behind. For folks who have lived here and seen Durham and the rest of the Triangle evolve, we want to make sure they have the opportunity to be a part of the change. That starts with education, and that’s why we are opening the Astro Code School.”

The Bureau of Labor Statistics predicts 20% growth for web developer jobs from 2012-2022. That growth is twice the projected growth of all U.S. occupations for the same period. Not only will Astro train developers to fill this job sector, but it will also focus on Django and Python; Python is widely recognized as the leading language for the next generation of programmers.

“It’s an exciting time to be in technology,” adds Colin. “It’s a field whose reach extends beyond the latest cool app. It’s clear technology will play a large part in solving some of the biggest issues of our time—humanitarian aid for war refugees, HIV/AIDS, and access to democracy, just to name a few. That’s some of the work we do at Caktus and we want to make sure Astro Code School gives future technologists the same tools to work towards making the world a better place.”

The first class in Python and Django Web Engineering will be held May 18th through August 10th of this year. Applications are now open and due May 11th.

To celebrate the opening of the school, we will be hosting a launch party on Friday, May 1st from 6:00 to 9:00 pm. Registration is required. The event will be held in our newly renovated historic space at 108 Morris St in downtown Durham. We hope to see you there!

Josh JohnsonRaspberry Pi Build Environment In No Time At All

Leveraging PRoot and qemu, it’s easy to configure Raspberry Pis and build and install packages without the need to do so on physical hardware. It’s especially nice if you have to work with many disk images at once, create specialized distributions, reset passwords, or install and customize applications that aren’t yet in the official repositories.

I’ve recently dug in to building apps and doing fun things with the Raspberry Pi. With the recent release of the Raspberry Pi 2, it’s an even more exciting platform. I’ve documented what I’ve been using to make my workflow more productive.

Setup

We’ll use a Linux machine; setup instructions for Ubuntu and Arch are below. I prefer Arch for desktop and personal work, while I use Debian or Ubuntu for production deployments.

Arch Linux is a great “tinkerer’s” distribution – if you haven’t used it before it’s worth checking out. It’s great on the Raspberry Pi.

Debian and Ubuntu have some differences, but share the same base and use the same package management system (apt). I’ve included instructions for Ubuntu in particular, since it’s the most similar to Raspbian, the default Raspberry Pi operating system, and folks may be more familiar with that environment.

Generally speaking, you’ll need the following things:

  • A physical computer or virtual machine running some version of Linux (setup instructions are provided for the latest Arch and Ubuntu, but any Linux should work).
  • Installation files for the Raspberry Pi.
  • SD cards suitable for whatever Raspberry Pi you have. We’ll learn how to work with raw disk images and how to copy disk images to SD cards.
  • QEMU, an emulator system, and its ARM processor support (the Raspberry Pi uses an ARM processor).
  • PRoot – a convenience tool that makes it easy to mount a “foreign” filesystem and run commands inside of it without booting.
  • A way to create disk images, and mount them like physical devices.
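
For that last item, nothing exotic is required on Linux: a sparse file created with dd can serve as a blank disk image. A minimal sketch (the filename blank.img is just an example, not from the article):

```shell
# Create a sparse 1 GiB file to use as a blank disk image.
# bs=1M with count=0 and seek=1024 extends the file to 1024 MiB
# without actually writing any data blocks to disk.
dd if=/dev/zero of=blank.img bs=1M count=0 seek=1024
ls -lh blank.img
```

Such an image can then be partitioned with fdisk and mapped with kpartx, just like the pre-made Raspbian image we download later.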

Once the packages are installed, the commands and processes for building and working with Raspberry Pi boot disks are the same.

NOTE: we assume you have sudo installed and configured.

Virtual Machine Notes

If you’re using an Apple (Mac OS X) computer or Windows, the easiest way to work with Linux systems is via virtualization. VirtualBox is available for most platforms and is easy to work with.

The VirtualBox documentation can walk you through installing VirtualBox and creating your first virtual machine.

When working with an SD card, you might want to follow the instructions for “Access to entire physical hard disk” to make the card accessible to the virtual machine. As an alternative, you could use a USB SD card reader with USB pass-through to present the entire USB device, rather than just the disk, to the virtual machine, and let the virtual machine deal with mounting it.

Both of these approaches can be (very) error prone, but provide the most “native” way of working.

Instead, I’d recommend installing guest additions. With guest additions installed in your virtual machine, you can use the shared folders feature of VirtualBox. This makes it easy to copy disk images created in your virtual machine to your host machine, and then you can use the standard instructions for Windows and Mac OS to copy the disk images to your SD cards.

Advanced Usage Note: Personally, my usual method of operation with VirtualBox VMs is to set up Samba in the virtual machine and share a folder over a host-only network (or I’ll use bridged networking so I can connect to it from any machine on my LAN). I’d consider this a more “advanced” approach, but I’ve had more consistent results with it for day-to-day work than with guest additions or mounting host disks. However, for the simple task of copying disk images back and forth to the virtual machine, the shared folders feature should suffice.

Arch Linux

We’ll use pacman and wget to procure and install most of the tools we need:

$ sudo pacman -S dosfstools wget qemu unzip pv
$ wget http://static.proot.me/proot-x86_64
$ chmod +x proot-x86_64
$ sudo mv proot-x86_64 /usr/local/bin/proot

First, we install the following packages:

dosfstools
Gives us the ability to create FAT filesystems, required for making a disk bootable on the RaspberryPi.
wget
General purpose file grabber – used for downloading installation files and PRoot
qemu
QEMU emulator – allows us to run RaspberryPi executables
unzip
Decompresses ZIP archives.
pv
Pipeline middleware that shows a progress bar (we’ll be using it to make copying disk images with dd a little easier for the impatient)
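
Since pv is described above as a helper for copying disk images with dd, here is the kind of pipeline it improves (a sketch only; /dev/sdX is a placeholder for your actual SD card device):

```shell
# Pipe the image through pv to get a progress bar, then into dd.
# CAUTION: verify the output device first; dd will overwrite
# whatever /dev/sdX really is without asking.
pv 2015-02-16-raspbian-wheezy.img | sudo dd of=/dev/sdX bs=4M
```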

Then we download PRoot, make the file executable, and copy it to a common location for global executables that everyone on a machine can access, /usr/local/bin. This location is just a suggestion – to follow along with the examples in this article, you just need to put the proot executable somewhere on your $PATH.

Finally, we’ll use an AUR package to obtain the kpartx tool.

kpartx wraps a handful of tasks required for creating loopback devices into a single action.

If you haven’t used the AUR before, check out the documentation first for an overview of the process, and to install prerequisites.

$ wget https://aur.archlinux.org/packages/mu/multipath-tools/multipath-tools.tar.gz
$ tar -zxvf multipath-tools.tar.gz
$ cd multipath-tools
$ makepkg
$ sudo pacman -U multipath-tools-*.pkg.tar.xz

Ubuntu

Ubuntu Desktop comes with most of the tools we need (in particular, wget, the ability to mount dos file systems, and unzip). As such, the process of getting set up for using PRoot is a bit simpler, compared to Arch.

Ubuntu uses apt-get for package installation.

$ sudo apt-get install qemu kpartx pv
$ wget http://static.proot.me/proot-x86_64
$ chmod +x proot-x86_64
$ sudo mv proot-x86_64 /usr/local/bin/proot

First, we install the following packages:

qemu
QEMU emulator – allows us to run RaspberryPi executables
kpartx
Helper tool that wraps a handful of tasks required for creating loopback devices into a single action.
pv
Pipeline middleware that shows a progress bar (we’ll be using it to make copying disk images with dd a little easier for the impatient)

Then, we install PRoot by downloading the binary from proot.me, making it executable, and putting it somewhere on our $PATH, /usr/local/bin, making it available to all users on the system. This location is merely a suggestion, but putting the proot executable somewhere on your $PATH will make it easier to follow along with the examples below.

Working With A Disk Image

A disk (in the Raspberry Pi’s case, we’re talking about an SD card) is just an arrangement of blocks for data storage. On top of those blocks is a description of how files are represented in those blocks, or a filesystem (for more detail, see the Wikipedia articles on Disk Storage and File System).

Disks can exist in the physical world, or can be represented by a special file, called a disk image. We can download pre-made images with Raspbian already installed from the official Raspberry Pi downloads page.

$ wget http://downloads.raspberrypi.org/raspbian_latest -O raspbian_latest.zip
$ unzip raspbian_latest.zip
Archive:  raspbian_latest.zip
  inflating: 2015-02-16-raspbian-wheezy.img

Take note of the name of the img file – it will vary depending on the current release of Raspbian at the time.

At this point we have a disk image we can mount by creating a loopback device. Once we have it mounted, we can use QEMU and PRoot to run commands within it without fully booting it.

We’ll use kpartx to set up a loopback device for each partition in the disk image:

$ sudo kpartx -a -v 2015-02-16-raspbian-wheezy.img 
add map loop0p1 (254:0): 0 114688 linear /dev/loop0 8192
add map loop0p2 (254:1): 0 6277120 linear /dev/loop0 122880

The -a command line switch tells kpartx to create new loopback devices. The -v switch asks kpartx to be more verbose and print out what it’s doing.

We can do a dry-run and inspect the disk image using the -l switch:

$ sudo kpartx -l 2015-02-16-raspbian-wheezy.img
loop0p1 : 0 114688 /dev/loop0 8192
loop0p2 : 0 6277120 /dev/loop0 122880
loop deleted : /dev/loop0

To be sure, we can inspect the partitions using fdisk -l:

$ sudo fdisk -l /dev/loop0

Disk /dev/loop0: 3.1 GiB, 3276800000 bytes, 6400000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0009bf4f

Device       Boot  Start     End Sectors Size Id Type
/dev/loop0p1        8192  122879  114688  56M  c W95 FAT32 (LBA)
/dev/loop0p2      122880 6399999 6277120   3G 83 Linux

We can also see them using lsblk:

$ lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda         8:0    0 14.9G  0 disk 
└─sda1      8:1    0 14.9G  0 part /
sdc         8:32   0 29.8G  0 disk 
└─sdc1      8:33   0 29.8G  0 part /run/media/jj/STEALTH
loop0       7:0    0  3.1G  0 loop 
├─loop0p1 254:0    0   56M  0 part 
└─loop0p2 254:1    0    3G  0 part 

Generally speaking, the first, smaller partition will be the boot partition, and the others will hold data. It’s typical for Raspberry Pi distributions to use a simple two-partition scheme like this.
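The sector numbers kpartx reports map directly to byte offsets (at 512 bytes per sector), which also means a partition can be mounted straight from the image with mount’s offset option instead of using kpartx – a sketch, assuming the same Raspbian image:

```shell
# Start sectors from the kpartx output above, converted to byte offsets:
BOOT_OFFSET=$(( 8192 * 512 ))       # 4194304 bytes
ROOT_OFFSET=$(( 122880 * 512 ))     # 62914560 bytes
echo "$BOOT_OFFSET $ROOT_OFFSET"

# Equivalent mount without kpartx (commented out – needs the image and sudo):
# sudo mount -o loop,offset=$ROOT_OFFSET 2015-02-16-raspbian-wheezy.img raspbian-root
```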

The new partitions will end up in /dev/mapper:

$ ls /dev/mapper
control  loop0p1  loop0p2

Now we can mount our partitions. We’ll first make a couple of descriptive directories for mount points:

$ mkdir raspbian-boot raspbian-root
$ sudo mount /dev/mapper/loop0p1 raspbian-boot
$ sudo mount /dev/mapper/loop0p2 raspbian-root

At this point we can go to the next section where we will run PRoot and start doing things “inside” the disk image.

Working With An Existing Disk

We can use PRoot with an existing disk (SD card) as well. The first step is to insert the disk into your computer. Your operating system will likely mount it automatically. We also need to find out which device the disk is registered as.

lsblk can answer both questions for us:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0 14.9G  0 disk 
└─sda1   8:1    0 14.9G  0 part /
sdb      8:16   1 14.9G  0 disk 
├─sdb1   8:17   1   56M  0 part /run/media/jj/boot
└─sdb2   8:18   1    3G  0 part /run/media/jj/f24a4949-f4b2-4cad-a780-a138695079ec
sdc      8:32   0 29.8G  0 disk 
└─sdc1   8:33   0 29.8G  0 part /run/media/jj/STEALTH

On my system, the SD card I inserted (a Raspbian disk I pulled out of a Raspberry Pi) came up as /dev/sdb. It has two partitions, sdb1 and sdb2. Both partitions were automatically mounted, to /run/media/jj/boot and /run/media/jj/f24a4949-f4b2-4cad-a780-a138695079ec, respectively.

Typically, the first, smaller partition will be the boot partition. To verify this, we’ll again use fdisk -l:

$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 14.9 GiB, 16021192704 bytes, 31291392 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0009bf4f

Device     Boot  Start     End Sectors Size Id Type
/dev/sdb1         8192  122879  114688  56M  c W95 FAT32 (LBA)
/dev/sdb2       122880 6399999 6277120   3G 83 Linux

Here we see that /dev/sdb1 is 56 megabytes in size, and is of type “W95 FAT32 (LBA)”. This is typically indicative of a Raspberry Pi boot partition, so /dev/sdb1 is our boot partition, and /dev/sdb2 is our root partition.

We can use the existing mounts that the operating system set up automatically for us, if we want, but it’s a bit easier to un-mount the partitions and mount them somewhere more descriptive, like raspbian-boot and raspbian-root:

$ sudo umount /dev/sdb1 /dev/sdb2
$ mkdir -p raspbian-boot raspbian-root
$ sudo mount /dev/sdb1 raspbian-boot
$ sudo mount /dev/sdb2 raspbian-root

Note: The -p switch causes mkdir to silently skip already-existing directories rather than report an error. We’ve added it here in case you were following along in the previous section and already have these directories handy.

A call to lsblk will confirm that we’ve mounted things as we expected:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0 14.9G  0 disk 
└─sda1   8:1    0 14.9G  0 part /
sdb      8:16   1 14.9G  0 disk 
├─sdb1   8:17   1   56M  0 part /run/media/jj/STEALTH/raspbian-boot
└─sdb2   8:18   1    3G  0 part /run/media/jj/STEALTH/raspbian-root
sdc      8:32   0 29.8G  0 disk 
└─sdc1   8:33   0 29.8G  0 part /run/media/jj/STEALTH

Now we can proceed to the next section, and run the same PRoot command to configure, compile and/or install things – but this time we’ll be working directly on the SD card instead of inside of a disk image.

Basic Configuration/Package Installation

Now that we’ve got either a disk image or a physical disk mounted, we can run commands within those filesystems using PRoot.

NOTE: The following command line switches worked for me, but took some experimentation to figure out. Please take some time to read the PRoot documentation so you understand exactly what the switches mean.

We can run any command directly (like say, apt-get) but it’s useful to be able to “log in” to the disk image (run a shell), and then perform our tasks:

$ sudo proot -q qemu-arm -S raspbian-root -b raspbian-boot:/boot /bin/bash

This mode of PRoot fakes the root user inside of the disk image. The -q switch wraps every command in the qemu-arm emulator program, making it possible to run code compiled for the Raspberry Pi’s ARM processor. The -S parameter sets the directory that will be the “root” – essentially, raspbian-root will map to /. -S also fakes the root user (id 0), and adds some protections for us in the event we’ve mixed in files from our host system that we don’t want the disk image code to modify. -b splices in additional directories – we add the /boot partition, since that’s where new kernel images and other boot-related stuff gets installed. This isn’t strictly necessary, but it’s useful for system upgrades and making changes to boot settings. Finally, we tell PRoot which command to run – in this case /bin/bash, the BASH shell.
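Since PRoot just runs whatever command you give it, you don’t have to start a shell at all – a one-off command works too. A sketch, using the same switches as above:

```shell
# Run a single command inside the image instead of an interactive shell;
# uname -m here reports the image's ARM architecture rather than the host's:
sudo proot -q qemu-arm -S raspbian-root -b raspbian-boot:/boot uname -m
```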

Now that we’re “in” the disk image, we can update and install new packages.

Since root is not a “normal” user in the default Raspbian installation, the path needs to be adjusted:

# export PATH=$PATH:/usr/sbin:/sbin:/bin:/usr/local/sbin

Now we can do the update/upgrade, and install any additional packages we might want (for example, the samba file sharing server):

# apt-get update
# apt-get upgrade
# apt-get install samba

Check out the man page for apt-get for full details (type man apt-get at a shell prompt).

You will likely see a lot of warnings and possibly errors when installing packages – these can usually be ignored, but make note of them – there may be some environmental tweaks that need to be made.

We can do almost anything in the PRoot environment that we could do logged into a running Raspberry Pi.

We can edit config.txt and change settings (for an explanation of the settings, see the documentation):

# vi /boot/config.txt

We can add a new user:

# adduser jj
Adding user `jj' ...
Adding new group `jj' (1004) ...
Adding new user `jj' (1001) with group `jj' ...
Creating home directory `/home/jj' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for jj
Enter the new value, or press ENTER for the default
	Full Name []: Josh Johnson
	Room Number []: 
	Work Phone []: 
	Home Phone []: 
	Other []: 

We can grant a user sudo privileges (the default sudo configuration allows anyone in the sudo group to run commands as root via sudo):

# usermod -a -G sudo jj
# groups jj
jj : jj sudo

You can reset someone’s password, or change the password of the default pi user:

# passwd pi
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully

The possibilities here are endless, with a few exceptions:

  • Running code that relies on the GPIO pins or drivers loaded into the kernel will not work.
  • Configuring devices (like, say, a wifi adapter) may work, but device information will likely be wrong.
  • Testing startup/shutdown scripts – since we’re not booting the disk image, these scripts aren’t run.

Compiling For The RPi

Raspbian comes with most of the tools we’ll need (in particular, the build-essential package). Let’s build and install the nginx web server – a relatively easy-to-build package.

If you’ve never compiled software on Linux before: most (but not all!) source code packages are provided as tarballs, and include scripts that help you build the software via what’s known as the “configure, make, make install” (or CMMI) procedure.

Note: For a great explanation (with examples you can follow to build your own CMMI package), George Brocklehurst wrote an excellent article explaining the details behind CMMI called “The magic behind configure, make, make install“.

First we’ll need to obtain the nginx tarball:

# wget http://nginx.org/download/nginx-1.7.12.tar.gz
# tar -zxvf nginx-1.7.12.tar.gz

Next we’ll look for a README or INSTALL file, to check for any extra build dependencies:

# cd nginx-1.7.12
# ls -l
total 660
-rw-r--r-- 1 jj   indiecity 249016 Apr  7 15:35 CHANGES
-rw-r--r-- 1 jj   indiecity 378885 Apr  7 15:35 CHANGES.ru
-rw-r--r-- 1 jj   indiecity   1397 Apr  7 15:35 LICENSE
-rw-r--r-- 1 root root          46 Apr 18 10:21 Makefile
-rw-r--r-- 1 jj   indiecity     49 Apr  7 15:35 README
drwxr-xr-x 6 jj   indiecity   4096 Apr 18 10:21 auto
drwxr-xr-x 2 jj   indiecity   4096 Apr 18 10:21 conf
-rwxr-xr-x 1 jj   indiecity   2478 Apr  7 15:35 configure
drwxr-xr-x 4 jj   indiecity   4096 Apr 18 10:21 contrib
drwxr-xr-x 2 jj   indiecity   4096 Apr 18 10:21 html
drwxr-xr-x 2 jj   indiecity   4096 Apr 18 10:21 man
drwxr-xr-x 2 root root        4096 Apr 18 10:23 objs
drwxr-xr-x 8 jj   indiecity   4096 Apr 18 10:21 src
# view README

We’ll note that, helpfully (cue eye roll), all nginx has put into the README is:

Documentation is available at http://nginx.org

A more direct link gives us a little more useful information. Scanning this, there aren’t any obvious dependencies or features we want to add/enable, so we can proceed.

We can also find out which options are available by running ./configure --help.

Note: There are several configuration options that control where files are put when the compiled code is installed – they may be of use, in particular the standard --prefix. This can help segregate multiple versions of the same application on a system, for example if you need to install a newer/older version and already have one installed via the apt package. It is also useful for building self-contained directory structures that you can easily copy from one system to another.
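For example, a hypothetical self-contained build of this nginx release (the /opt path is just an illustration, not something the nginx docs prescribe):

```shell
# Keep everything under one directory so the build is easy to relocate:
./configure --prefix=/opt/nginx-1.7.12
make
make install
```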

Run ./configure and note any warnings or errors. There may be some modules or other things not found – that’s typically OK, but it can help explain an eventual error toward the end of the configure script or during compilation:

# cd nginx-1.7.12
# ./configure
...
checking for PCRE library ... not found
checking for PCRE library in /usr/local/ ... not found
checking for PCRE library in /usr/include/pcre/ ... not found
checking for PCRE library in /usr/pkg/ ... not found
checking for PCRE library in /opt/local/ ... not found
...

./configure: error: the HTTP rewrite module requires the PCRE library.
You can either disable the module by using --without-http_rewrite_module
option, or install the PCRE library into the system, or build the PCRE library
statically from the source with nginx by using --with-pcre=<path> option.

Whoa, we ran into a problem! For our use case (just showing off how to do a CMMI build in a PRoot environment) we probably don’t need the rewrite module, so we can re-run ./configure with the --without-http_rewrite_module switch.

However, it’s useful to understand how to track down dependencies like this, and rewriting is a pretty killer feature of any http server, so let’s install the dependency.

The configure script mentions the “PCRE library”. PCRE stands for “Perl Compatible Regular Expressions”. Perl is a classic systems language with hard-core text processing capabilities. It’s particularly known for its regular expression support and syntax. The Perl regular expression syntax is so useful, in fact, that some folks built a library allowing other programmers to use it without having to use Perl itself.

Note: This information can be found by using your favorite search engine!

There are two ways libraries like PCRE are installed. The first, and easiest, is that a system package will be available with the library pre-compiled and ready to go. The second will require the same steps we’re following to install nginx – download a tarball, extract, and configure, make, make install.

To find a package, you can use apt-cache search or aptitude search.

I prefer aptitude, since it will tell us what packages are already installed:

# aptitude search pcre
v   apertium-pcre2                                     -                                                             
p   cl-ppcre                                           - Portable Regular Express Library for Common Lisp            
p   clisp-module-pcre                                  - clisp module that adds libpcre support                      
p   gambas3-gb-pcre                                    - Gambas regexp component                                     
p   haskell-pcre-light-doc                             - transitional dummy package                                  
p   libghc-pcre-light-dev                              - Haskell library for Perl 5-compatible regular expressions   
v   libghc-pcre-light-dev-0.4-4f534                    -                                                             
p   libghc-pcre-light-doc                              - library documentation for pcre-light                        
p   libghc-pcre-light-prof                             - pcre-light library with profiling enabled                   
v   libghc-pcre-light-prof-0.4-4f534                   -                                                             
p   libghc-regex-pcre-dev                              - Perl-compatible regular expressions                         
v   libghc-regex-pcre-dev-0.94.2-49128                 -                                                             
p   libghc-regex-pcre-doc                              - Perl-compatible regular expressions; documentation          
p   libghc-regex-pcre-prof                             - Perl-compatible regular expressions; profiling libraries    
v   libghc-regex-pcre-prof-0.94.2-49128                -                                                             
p   libghc6-pcre-light-dev                             - transitional dummy package                                  
p   libghc6-pcre-light-doc                             - transitional dummy package                                  
p   libghc6-pcre-light-prof                            - transitional dummy package                                  
p   liblua5.1-rex-pcre-dev                             - Transitional package for lua-rex-pcre-dev                   
p   liblua5.1-rex-pcre0                                - Transitional package for lua-rex-pcre                       
p   libpcre++-dev                                      - C++ wrapper class for pcre (development)                    
p   libpcre++0                                         - C++ wrapper class for pcre (runtime)                        
p   libpcre-ocaml                                      - OCaml bindings for PCRE (runtime)                           
p   libpcre-ocaml-dev                                  - OCaml bindings for PCRE (Perl Compatible Regular Expression)
v   libpcre-ocaml-dev-werc3                            -                                                             
v   libpcre-ocaml-werc3                                -                                                             
i   libpcre3                                           - Perl 5 Compatible Regular Expression Library - runtime files
p   libpcre3-dbg                                       - Perl 5 Compatible Regular Expression Library - debug symbols
p   libpcre3-dev                                       - Perl 5 Compatible Regular Expression Library - development f
p   libpcrecpp0                                        - Perl 5 Compatible Regular Expression Library - C++ runtime f
p   lua-rex-pcre                                       - Perl regular expressions library for the Lua language       
p   lua-rex-pcre-dev                                   - PCRE development files for the Lua language                 
v   lua5.1-rex-pcre                                    -                                                             
v   lua5.1-rex-pcre-dev                                -                                                             
v   lua5.2-rex-pcre                                    -                                                             
v   lua5.2-rex-pcre-dev                                -                                                             
p   pcregrep                                           - grep utility that uses perl 5 compatible regexes.           
p   pike7.8-pcre                                       - PCRE module for Pike                                        
p   postfix-pcre                                       - PCRE map support for Postfix       

See man aptitude for full details, but the gist is that p means the package is available but not installed, v is a virtual package that points to other packages, and i means the package is installed.

What we want is a package with header files and modules we can compile against – these are usually named lib[SOMETHING]-dev.

Scanning the list, we see a package named libpcre3-dev – this is probably what we want. We can find out by installing it:

# apt-get install libpcre3-dev

Now we can re-run ./configure and see if it works:

# ./configure
...
checking for PCRE library ... found
...
Configuration summary
  + using system PCRE library
  + OpenSSL library is not used
  + using builtin md5 code
  + sha1 library is not found
  + using system zlib library

  nginx path prefix: "/usr/local/nginx"
  nginx binary file: "/usr/local/nginx/sbin/nginx"
  nginx configuration prefix: "/usr/local/nginx/conf"
  nginx configuration file: "/usr/local/nginx/conf/nginx.conf"
  nginx pid file: "/usr/local/nginx/logs/nginx.pid"
  nginx error log file: "/usr/local/nginx/logs/error.log"
  nginx http access log file: "/usr/local/nginx/logs/access.log"
  nginx http client request body temporary files: "client_body_temp"
  nginx http proxy temporary files: "proxy_temp"
  nginx http fastcgi temporary files: "fastcgi_temp"
  nginx http uwsgi temporary files: "uwsgi_temp"
  nginx http scgi temporary files: "scgi_temp"

The library was found, the error is gone, and so now we can proceed with compilation.

To build nginx, we simply run make:

# make

If all goes well, then you can install it:

# make install

This same basic process can be used to build custom applications written in C/C++, to build applications that aren’t yet in the package repository, or build applications with specific features or optimizations enabled that the standard packages might not have.

Using Apt To Install Build Dependencies

One more useful thing that apt-get can do for us: it can install the build dependencies for any given package in the repository. This allows us to get most, if not all, potentially missing dependencies to build a known application.

We could have started off with our nginx exploration by first installing its build dependencies:

# apt-get build-dep nginx

This won’t solve every dependency issue, but it’s a useful tool in getting all of your ducks in a row for building, especially for more complex things like desktop applications.

Be careful with build-dep – it can bring in a lot of things, some you may not really need. In our case it’s not really a problem, but be aware of space limitations.

Umount and Clean Up

Once we’ve gotten our disk image configured as we like, we need to un-mount it.

First, we need to exit the bash shell we started with PRoot, then we’ll call sync to ensure all data is flushed to any disks:

# exit
$ sync

Now we can un-mount the partitions (the command is the same whether we’re using a disk image or an SD card):

$ sudo umount raspbian-root raspbian-boot

We can double-check that the disk is no longer mounted by calling mount without any additional parameters, or by using lsblk:

$ mount
...

With lsblk, we’ll still see the disks (or loopback devices) present, but not mounted:

$ lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda         8:0    0 14.9G  0 disk 
└─sda1      8:1    0 14.9G  0 part /
sdc         8:32   0 29.8G  0 disk 
└─sdc1      8:33   0 29.8G  0 part /run/media/jj/STEALTH
loop0       7:0    0  3.1G  0 loop 
├─loop0p1 254:0    0   56M  0 part 
└─loop0p2 254:1    0    3G  0 part 

If we’re using a disk image, we’ll want to destroy the loopback devices. This is accomplished with kpartx -d:

$ sudo kpartx -d 2015-02-16-raspbian-wheezy.img

We can verify that it’s gone using lsblk again:

$ lsblk
sda      8:0    0 14.9G  0 disk 
└─sda1   8:1    0 14.9G  0 part /
sdc      8:32   0 29.8G  0 disk 
└─sdc1   8:33   0 29.8G  0 part /run/media/jj/STEALTH

At this point we can write the disk image to an SD card, or eject the SD card and insert it into a Raspberry Pi.

Writing a Disk Image to an SD Card

We’ll use the dd command, which writes raw blocks of data from one block device to another, to copy the disk image we made into an SD card.

NOTE: The SD card you use will be COMPLETELY erased. Proceed with caution.

First, insert the SD card into your computer (or card reader, etc). Depending on your system, it may be automatically mounted. We can find out the device name and whether it’s mounted using lsblk:

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0  14.9G  0 disk 
└─sda1   8:1    0  14.9G  0 part /
sdb      8:16   1  14.9G  0 disk 
├─sdb1   8:17   1 114.3M  0 part 
├─sdb2   8:18   1     1K  0 part 
└─sdb3   8:19   1    32M  0 part /run/media/jj/SETTINGS
sdc      8:32   0  29.8G  0 disk 
└─sdc1   8:33   0  29.8G  0 part /run/media/jj/STEALTH

We can see the new disk came up as sdb. It has three partitions, sdb1, sdb2, and sdb3. Looking at the MOUNTPOINT column, we can tell that my operating system auto-mounted sdb3 into the /run/media/jj/SETTINGS directory.

Note: The partition layout may vary depending on what was on the SD card before you inserted it. My SD card had a fresh copy of NOOBS that hadn’t yet installed an OS.

We can double-check that sdb is the right disk with fdisk:

$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 14.9 GiB, 16021192704 bytes, 31291392 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000cb53d

Device     Boot    Start      End  Sectors   Size Id Type
/dev/sdb1           8192   242187   233996 114.3M  e W95 FAT16 (LBA)
/dev/sdb2         245760 31225855 30980096  14.8G 85 Linux extended
/dev/sdb3       31225856 31291391    65536    32M 83 Linux

fdisk tells us that this is a 16GB drive. The exact capacity cited by some drive manufacturers is not in “real” gigabytes (powers of 2[*]) but in billions of bytes – note the byte count: 16,021,192,704.
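We can check that arithmetic with the shell itself:

```shell
BYTES=16021192704
echo "$(( BYTES / 1000000000 )) GB (decimal, as marketed)"
echo "$(( BYTES / 1073741824 )) GiB (binary, as fdisk reports it, ~14.9)"
```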

We can see the three partitions, and what format they are in. The small FAT filesystem is a good indication that this is a bootable Raspberry Pi disk.

With a fresh SD card, the call to fdisk may look more like this:

Disk /dev/sdb: 14.9 GiB, 16021192704 bytes, 31291392 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start      End  Sectors  Size Id Type
/dev/sdb1        8192 31291391 31283200 14.9G  c W95 FAT32 (LBA)

Most SD cards are pre-formatted with a single partition containing a FAT32 filesystem.

It’s important to be able to differentiate between your system drives and the target for copying over your disk image – if you point dd at the wrong place, you can destroy important things, like your operating system!

Now that we’re sure that /dev/sdb is our SD card, we can proceed.

Since lsblk indicated that at least one of the partitions was mounted (sdb3), we will fist need to un-mount it:

$ sudo umount /dev/sdb3

Now we can verify it’s indeed not mounted:

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0  14.9G  0 disk 
└─sda1   8:1    0  14.9G  0 part /
sdb      8:16   1  14.9G  0 disk 
├─sdb1   8:17   1 114.3M  0 part 
├─sdb2   8:18   1     1K  0 part 
└─sdb3   8:19   1    32M  0 part 
sdc      8:32   0  29.8G  0 disk 
└─sdc1   8:33   0  29.8G  0 part /run/media/jj/STEALTH

And copy the disk image:

$ sudo dd if=2015-02-16-raspbian-wheezy.img of=/dev/sdb bs=4M
781+1 records in
781+1 records out
3276800000 bytes (3.3 GB) copied, 318.934 s, 10.3 MB/s

This will take some time, and dd gives no output until it’s finished. Be patient.

dd has a fairly simple interface. The if option indicates the in file, or the disk (or disk image in our case) that is being copied. The of option sets the out file, or the disk to write to. bs sets the block size, which indicates how big of a piece of data to write at a time.

The bs value can be tweaked to get faster or more reliable performance in various situations – we’re using 4M (four megabytes) as recommended by raspberrypi.org. The larger the value, the faster dd will run, but there are physical limits to what your system can handle, so it’s best to stick with the recommended value.
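Incidentally, the “781+1 records” in dd’s output above is just this arithmetic – 781 full 4 MiB blocks, plus one partial block for the remainder:

```shell
BYTES=3276800000                 # size of the disk image
BS=$(( 4 * 1024 * 1024 ))        # bs=4M
echo "$(( BYTES / BS )) full blocks, $(( BYTES % BS )) bytes left over"
```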

As noted, dd gives us no output until it’s completed. This is kind of an annoying thing about dd, but it can be remedied. The easiest way is to install a tool called pv and split the command – pv acts as an intermediary between two commands and displays a progress bar as data moves along. dd can read and write data to a pipe (details). So we can use two dd commands, put pv in the middle, and get a nice progress bar.

Here’s the same copy as before, but using pv:

Note: Here we’re using sh -c to wrap the command pipeline in quotes. This allows us to provide the entire pipeline as a single unit. If we didn’t, the shell would interpret the first pipe in the pipeline as part of the call to sudo, and not what we want to run as root.

$ ls -l 2015-02-16-raspbian-wheezy.img 
-rw-r--r-- 1 jj jj 3276800000 Apr 18 07:58 2015-02-16-raspbian-wheezy.img
$ sudo sh -c "dd if=2015-02-16-raspbian-wheezy.img bs=4M | pv --size=3276800000 | dd of=/dev/sdb"
 613MiB 0:02:31 [4.22MiB/s] [===========>                                                      ] 19% ETA 0:10:04

We pass pv a --size argument to give it an idea of how big the file is, so it can provide accurate progress. We found out the size of our disk image using ls -l, which shows the size of the file in bytes.
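Rather than copying the byte count out of ls -l by hand, we can ask stat for it directly (GNU stat; demonstrated here with a small stand-in file, since the real image name will vary by release):

```shell
# Create a 4 MiB stand-in for the disk image:
truncate -s 4194304 dummy.img
SIZE=$(stat -c %s dummy.img)     # size in bytes – the same number ls -l shows
echo "$SIZE"
# The real copy would then be, e.g.:
# sudo sh -c "dd if=dummy.img bs=4M | pv --size=$SIZE | dd of=/dev/sdX"
rm dummy.img
```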

If we run lsblk again, we’ll see the different partition arrangement now on sdb:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0 14.9G  0 disk 
└─sda1   8:1    0 14.9G  0 part /
sdb      8:16   1 14.9G  0 disk 
├─sdb1   8:17   1   56M  0 part 
└─sdb2   8:18   1    3G  0 part 
sdc      8:32   0 29.8G  0 disk 
└─sdc1   8:33   0 29.8G  0 part /run/media/jj/STEALTH

fdisk -l gives a bit more detail:

$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 14.9 GiB, 16021192704 bytes, 31291392 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0009bf4f

Device     Boot  Start     End Sectors Size Id Type
/dev/sdb1         8192  122879  114688  56M  c W95 FAT32 (LBA)
/dev/sdb2       122880 6399999 6277120   3G 83 Linux

Now we can sync the disks:

$ sync

At this point we have an SD card we can put into a Raspberry Pi and boot.

[*] (1GB = 1 byte * 1024 (kilobyte) * 1024 (megabyte) * 1024, or 1,073,741,824 bytes)

Extra Credit: Making our own disk image

Some distributions, such as Arch, don’t distribute disk images, but instead distribute tarballs of files. They let you set up the disk however you want, then copy the files over to install the operating system.

We can create our own disk images using fallocate, and then use fdisk or parted (or if you prefer a GUI, gparted) to partition the disk.

We’ll create a disk image for the latest Arch Linux ARM distribution for the Raspberry Pi 2.

Note: You must create the disk image file on a compatible filesystem, such as ext4, for this to work. This is the default system disk filesystem for most modern Linux distributions, including Arch and Ubuntu, so for most people this isn’t a problem. The implication is that this will not work on, say, an external hard drive formatted in an incompatible format, such as FAT32.

First we’ll create an 8 gigabyte empty disk image:

$ fallocate -l 8G arch-latest-rpi2.img
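If fallocate isn’t available, or your filesystem doesn’t support it, creating a sparse file with truncate is a common alternative (for our purposes the resulting image behaves the same):

```shell
truncate -s 8G arch-latest-rpi2.img
stat -c %s arch-latest-rpi2.img     # 8589934592 bytes (8 GiB)
```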

We’ll use fdisk to partition the disk. We need two partitions. The first will be 100 megabytes, formatted as FAT32. We’ll need to set the partition’s system id to correspond to FAT32 with LBA so that the Raspberry Pi’s BIOS knows how to read it.

Note: I’ve had trouble finding documentation as to exactly why FAT + LBA is required; the assumption is that it has something to do with how the ARM processor loads the operating system in the earliest boot stages. If anyone knows more detail or can point me to the documentation about this, it would be greatly appreciated!

The offset for the partition will be 2048 blocks – this is the default that fdisk will suggest (and what the Arch installation instructions tell us to do).

Note: This seems to work well – however, there is some confusion about partition alignment. The Raspbian disk images use an 8192-block offset, and there is a lot of information available explaining how bad alignment can cause quicker SD card degradation and hurt write performance. I’m still trying to figure out the best way to address this; it’s another area where community help would be appreciated :) Here are a few links that dig into the issue: http://wiki.laptop.org/go/How_to_Damage_a_FLASH_Storage_Device, http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/, http://3gfp.com/wp/2014/07/formatting-sd-cards-for-speed-and-lifetime/.

The second partition will be ext4, and use the rest of the available disk space.

We’ll start fdisk and get the initial prompt. No changes will be saved until we instruct fdisk to do so:

$ fdisk arch-latest-rpi2.img
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x152a22d4.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help):

Most of the information here is just telling us that this is a block device with no partitions. If you need help, as indicated, you can type m:

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

First, we need to create a new disk partition table. This is done by entering o:

Command (m for help): o
Building a new DOS disklabel with disk identifier 0xa8e8538a.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Next, we’ll create our first primary partition, the boot partition, at a 2048-sector offset and 100MB in size.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-16777215, default 2048): 2048
Last sector, +sectors or +size{K,M,G} (2048-16777215, default 16777215): +100M

By using the relative size +100M, we save ourselves the trouble of doing the math to figure out how many sectors we need.
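
If you do want to check the arithmetic yourself, it's simple: 100MB is 100 × 1024 × 1024 bytes, and each sector is 512 bytes. A quick shell calculation:

```shell
# sectors needed for 100MB at 512 bytes per sector
echo $((100 * 1024 * 1024 / 512))             # 204800

# last sector of a 100MB partition starting at sector 2048
echo $((2048 + 100 * 1024 * 1024 / 512 - 1))  # 206847
```

This matches the End value fdisk reports for the partition.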

We can see what we have so far by using the p command:

Command (m for help): p

Disk arch-latest-rpi2.img: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa8e8538a

               Device Boot      Start         End      Blocks   Id  System
arch-latest-rpi2.img1            2048      206847      102400   83  Linux

Next, we need to set the partition type (system id) by entering t:

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi eb  BeOS fs
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         ee  GPT
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ef  EFI (FAT-12/16/
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        f0  Linux/PA-RISC b
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f1  SpeedStor
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f4  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f2  DOS secondary
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      fb  VMware VMFS
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep
1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT
1e  Hidden W95 FAT1 80  Old Minix
Hex code (type L to list codes): c
Changed system type of partition 1 to c (W95 FAT32 (LBA))

After the t command, we opted to enter L to see the list of possible codes. We then see that W95 FAT32 (LBA) corresponds to the code c.

Now we can make our second primary partition for data storage, utilizing the rest of the disk. We again use the n command:

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): p
Partition number (1-4, default 2): 2
First sector (206848-16777215, default 206848):
Using default value 206848
Last sector, +sectors or +size{K,M,G} (206848-16777215, default 16777215):
Using default value 16777215

We accepted the defaults for all of the prompts.

Now, entering p again, we can see the state of the partition table:

Command (m for help): p

Disk arch-latest-rpi2.img: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa8e8538a

               Device Boot      Start         End      Blocks   Id  System
arch-latest-rpi2.img1            2048      206847      102400    c  W95 FAT32 (LBA)
arch-latest-rpi2.img2          206848    16777215     8285184   83  Linux

Now we can write out the table (w), which will exit fdisk:

Command (m for help): w
The partition table has been altered!


WARNING: If you have created or modified any DOS 6.x
partitions, please see the fdisk manual page for additional
information.
Syncing disks.
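
As an aside, the same layout can be created non-interactively, which is handy if you want to script image builds later. One way to do it is with sfdisk; this is a sketch, and it assumes a reasonably recent util-linux whose sfdisk accepts this script format:

```shell
# Write the same two-partition layout without interactive prompts:
# partition 1: W95 FAT32 (LBA), sectors 2048-206847; partition 2: Linux, rest of disk.
sfdisk arch-latest-rpi2.img <<'EOF'
label: dos
unit: sectors
start=2048, size=204800, type=c
start=206848, type=83
EOF
```

Running sfdisk -d arch-latest-rpi2.img afterwards dumps the table back out, so you can confirm it matches what we built by hand above.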

Now we need to format the partitions. We’ll use kpartx to create block devices for us that we can format:

$ sudo kpartx -av arch-latest-rpi2.img
add map loop0p1 (252:0): 0 204800 linear /dev/loop0 2048
add map loop0p2 (252:1): 0 16570368 linear /dev/loop0 206848

As we saw earlier, the devices will show up in /dev/mapper, as /dev/mapper/loop0p1 and /dev/mapper/loop0p2.

First we’ll format the boot partition, loop0p1, as FAT:

$ sudo mkfs.vfat /dev/mapper/loop0p1
mkfs.fat 3.0.26 (2014-03-07)
unable to get drive geometry, using default 255/63

Next the data partition, in ext4 format:

$ sudo mkfs.ext4 /dev/mapper/loop0p2
mke2fs 1.42.9 (4-Feb-2014)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
518144 inodes, 2071296 blocks
103564 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2122317824
64 block groups
32768 blocks per group, 32768 fragments per group
8096 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

At this point we just need to mount the new filesystems, download the installation tarball, and use tar to extract and copy the files:

First we’ll grab the installation files:

$ wget http://archlinuxarm.org/os/ArchLinuxARM-rpi-2-latest.tar.gz

Next we’ll mount the new filesystems:

$ mkdir arch-root arch-boot
$ sudo mount /dev/mapper/loop0p1 arch-boot
$ sudo mount /dev/mapper/loop0p2 arch-root

And finally populate the disk image with the system files, and move the boot directory to the boot partition:

$ sudo tar -xpf ArchLinuxARM-rpi-2-latest.tar.gz -C arch-root
$ sync
$ sudo mv arch-root/boot/* arch-boot/

We’re using a few somewhat less common parameters for tar. Typically we’ll use -xvf to tell tar to extract (-x), be verbose (-v) and specify the file (-f). We’ve added the -p switch to preserve permissions. This is especially important with system files.

The -C switch tells tar to change to the arch-root directory before extraction, effectively extracting the files directly to the root filesystem.

You may see some warnings about extended header keywords; these can be ignored.

Now we just need to clean up (unmount, remove the loopback devs):

$ sudo umount arch-root arch-boot
$ sudo kpartx -d arch-latest-rpi2.img

Now we’ve got our own Arch disk image we can distribute, or copy onto SD cards. We can also mount it on the loopback and use PRoot to further configure it, as we did above with Raspbian.
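
To copy the image onto an SD card, plain dd does the job. In the sketch below, /dev/sdX is a placeholder for your card's device node; double-check it with lsblk first, since dd will happily overwrite whatever you point it at:

```shell
# Replace /dev/sdX with your SD card's device (find it with: lsblk)
sudo dd if=arch-latest-rpi2.img of=/dev/sdX bs=4M

# flush buffers before removing the card
sync
```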

Where To Go From Here

With this basic workflow, we can do all sorts of interesting things. A few ideas:

  • Distribute disk images pre-configured with applications we created.
  • Pre-configure images and SD cards for use in classrooms, meetups, demos, etc.
  • Set up a cron job that runs nightly and creates a disk image with the latest packages.
  • Build our own packages (either just create tarballs or use a tool like FPM and build deb packages) for drivers and other software and save other folks the hassle of doing this themselves.
  • Create rudimentary disk duplication setups for putting one image on a bunch of SD cards.
  • Fix broken installs.
  • Construct build and testing systems; integrate with tools like Jenkins.

So there we go – now you can customize the Raspberry Pi operating system with impunity, on your favorite workstation or laptop machine. If you have any questions, corrections, or suggestions for ways to streamline the process, please leave a comment!


Tim HopperUpdated About Me Page

I just gave my About Me page a long overdue update.

Astro Code SchoolAstro Launch Party

RSVP: astro-caktus.eventbrite.com
What: Astro Code School Launch Party
Where: 108 Morris Street, Suite 1B, Durham, NC 27705
When: May 1, 2015, 6pm - 9pm

You are invited to the Astro Code School launch party! We’ll have light refreshments and opportunities to meet the fine folks at Astro and Caktus Consulting Group. Come learn more about the first full-time code school to specialize in Python and Django on the East Coast!

Please RSVP at the URL above. I hope you can make it!

Astro Code SchoolFULL Class Syllabus for Python & Django Web Engineering

A day-by-day full class syllabus with a lot more information about what you can learn in our Python & Django Web Engineering class is now available. It's now all on its own page. (BTW, we call this class BE 102. BE stands for Back End. It's a formal name to differentiate it from other classes we plan on providing.)

The deadline to apply for our first class is May 11. If you're interested please head on over to the Apply page and fill out the form.

Thanks!

Caktus GroupEpic Allies Team Members to Speak at Innovate your Cool

The Art of Cool festival is a staple of spring happenings in the Triangle. A three-day festival to present, promote, and preserve jazz and jazz-influenced music, The Art of Cool always promises to be a great time for those interested in music, art, and delicious food from Durham’s many food trucks. But what does music have to do with programming and app development? This year, Caktus Group is helping to sponsor a new portion of the festival called Innovate Your Cool. Innovate Your Cool celebrates the power of cool ideas, advancing innovative thinking by bringing together intelligent people with radically new ideas.

Not only is Caktus helping to sponsor the event, but our very own Digital Health Product Manager NC Nwoko will be giving a lightning talk on “Hacking HIV Stigma with Game Apps” with Kate Muessig, Assistant Professor at the UNC Gillings School of Global Public Health. Both Kate and NC are part of the team of intelligent people working on the Epic Allies gaming app for young men and teens who are HIV positive.

The Epic Allies project, originally begun in 2012 in collaboration with Duke and UNC, is a gaming app that seeks to make taking HIV medication—as well as creating and maintaining healthy habits—fun. The app uses games and social networking to reinforce drug adherence, thereby lowering viral loads and curbing the spread of HIV. It is an innovative mHealth solution for a high-risk population in critical need, an ideal topic for the Innovate Your Cool conference.

Also present will be the keynote speaker, Wayne Sutton, talking about diversity in the fields of tech and innovation. Other topics of the event will include startup culture, innovation on “Black Wall Street,” community and economic development, and a panel discussion on Code the Dream. Dr. Chris Emdin of #hiphoped will also be leading a hackathon combining science and hip hop and geared towards high school aged students.

Innovate your Cool is Saturday, April 25th from 10 am to 4pm and will be hosted at American Underground. Register today. We can’t wait!

Caktus GroupFrom Intern to Professional Developer

Quite often, people undertake internships at the beginning of their career. For me, it was a mid-career direction change and a leap of faith. In order to facilitate this career move, I took a Rails Engineering class at The Iron Yard in the fall of 2014. I had limited experience as a developer and no experience in Django prior to my internship at Caktus. Because of the structure and support Caktus provided and my enthusiasm for becoming a developer, my internship turned out to be the ideal way for me to make the transition from a novice to a professional developer.

What I Expected

When I chose to make this career shift, I read and thought a great deal about the challenges I might face due to my age and gender. I had minimal apprehensions about coding itself. I like math and languages, and I’m a good problem-solver. My concerns were about how I would navigate a new industry at this point in my life.

While I had general concerns about making this leap, I was sure Caktus was the place I wanted to try it. When I was in code school, I met Caktus employees and saw some of the work they do, particularly SMS apps in the developing world. It was clear that Caktus’ values as a company align well with mine. They are principled, creative people whose apps make significant and sustainable positive impact on people’s lives. I was excited to be part of a team whose work I supported so wholeheartedly.

What Caktus did

Caktus did a number of things, both consciously and subconsciously, to create a welcoming and supportive environment in which I could learn and succeed. The sexism and ageism that are allegedly rampant in tech are notably absent at Caktus. My co-workers understood that I was a capable but inexperienced developer. They were all eager to share their knowledge and help without making any assumptions about me or my abilities. Sharing knowledge cooperatively is standard operating procedure throughout Caktus, and I think it’s one of the reasons the company is so successful.

Something Caktus did, very deliberately, to help me was to provide me with a female mentor, Karen Tracey, Lead Developer and Technical Manager at Caktus. While any of the developers at Caktus would make great mentors, pairing me with a woman who has worked as a developer for her entire career was incredibly valuable. Karen provided me with thoughtful guidance and insight gained from her experience. She was able to guide me in career choices, goal setting, and on navigating an industry that can be very unwelcoming to women. She showed me that I can succeed and be happy in this industry and, more importantly, helped me figure out how. She also helped me strategize about how I can open doors for others in this industry, particularly those from groups underrepresented in tech. That’s a personal goal of mine, and one I know I will find support from Caktus in pursuing.

What Rob did

Caktus provided additional support in the form of another co-worker, Rob Lineberger, who worked very closely and patiently with me on coding itself. We worked on a real client project, and Rob was very good at scaling work for me so that I could experience some challenges and some accomplishments each day. When I was stuck on a problem, Rob intuited what conceptual background I needed to move forward. He walked me through problems so that I would be able to use the skills and knowledge I was acquiring in the future when I was working on a problem on my own. Working with Rob on this project ended up being a series of lessons in the fundamentals of web development that, in the end, gave me a broad and useful toolbox to use after the internship.

What I did

Because the project was well managed, I was able to work on a variety of different pieces in order to get a really good sense of how a Django app works. One piece I took significant ownership of was a routing app that communicated with the Google Directions API. This app in particular required that I explore JavaScript and jQuery in a confined, practical context, a very useful opportunity for me to expand my skills. Having discrete, challenging, yet attainable assignments like this created an ideal learning experience, and I was able to produce code that was demonstrated to the client.

In addition to this app, I worked with tables, database logic, and testing, all essential to understanding how Django apps work. I gained knowledge and confidence, and I had a lot of fun coding and getting to know my co-workers professionally and personally. The experience allowed me to see myself as a developer, more specifically as a developer at Caktus. Happily, Caktus saw me the same way, and I am thrilled to continue as a full-time developer with this passionate, dedicated, and inspiring group of people.

Astro Code SchoolAstro at PyCon 2015

Hello from Montréal, QC! We're here participating in the annual North American 2015 Python Conference.

So far Caleb has helped out at the Django Girls Workshop with three other Caktus Group colleagues.

Caleb teaching at the Django Girls workshop at PyCon2015

I went to the PyCon Education Summit. It was great to see folks from around the world, including North Carolina, share cutting-edge education ideas. There were lots of amazing K-12 and university examples of how Python is being used to teach programming.

Caleb teaching at Django Girls Workshop at PyCon 2015

We're now hanging out at the Expo telling folks from around the world about Durham and our school. So far I've met people from Poland, Canada, India, Hawaii, and lots of US States. Very fun to represent for North Carolina.

Frank WierzbickiJython 2.7 release candidate 2 available!

On behalf of the Jython development team, I'm pleased to announce that the second release candidate of Jython 2.7 is available! We've now fixed the windows installer issues from rc1. I'd like to thank Amobee for sponsoring my work on Jython. I'd also like to thank the many contributors to Jython.

Please see the NEWS file for detailed release notes. This release of Jython requires JDK 7 or above.

This release is being hosted at maven central. There are three main distributions. In order of popularity:
To see all of the files available including checksums, go here and navigate to the appropriate distribution and version.

Frank WierzbickiJython 2.7 release candidate 1 available!

[Update: on Windows machines the installer shows an error at the end. The installer needs to be closed manually, but then the install should still work. We will fix this for rc2.]

On behalf of the Jython development team, I'm pleased to announce that the first release candidate of Jython 2.7 is available! We're getting very close to a real release finally! I'd like to thank Amobee for sponsoring my work on Jython. I'd also like to thank the many contributors to Jython.

Jython 2.7rc1 brings us up to language level compatibility with the 2.7 version of CPython. We have focused largely on CPython compatibility, and so this release of Jython can run more pure Python apps than any previous release. Please see the NEWS file for detailed release notes. This release of Jython requires JDK 7 or above.

This release is being hosted at maven central. There are three main distributions. In order of popularity:
To see all of the files available including checksums, go here and navigate to the appropriate distribution and version.

Caktus GroupHow to Find Cakti at PyCon 2015

We’re very excited for PyCon 2015 and can’t wait for the fun to begin. Working on the PyCon website stoked our excitement early, so it’s almost surreal that PyCon is finally here. With an overwhelming number of great events, we wanted to highlight ones Caktus and our staff will be taking part in. Below you’ll find a list of where we’ll be each day. Please join us!

Wednesday: Building an SMS App with Django (3:30pm)

Ever wanted to build an SMS app? It’s UNICEF’s tool of choice for reaching the most remote and under-resourced areas in the world. Our team (Mark Lavin, David Ray, Caleb Smith) can walk you through the process.

Thursday: DjangoGirls Workshop (9am)

We’re a proud DjangoGirls sponsor. For this workshop, Mark Lavin and Karen Tracey, the leaders of our development team, David Ray, and Astro Code School lead instructor Caleb Smith, will act as TAs and help participants create their first Django app.

Thursday: O’Reilly Book Signing and Opening Reception (6pm)

Mark Lavin will be signing 25 free(!) copies of his O’Reilly book, Lightweight Django. Stop on by while you can; it’s first come, first served. You’ll also find many of us at the reception itself.

Friday - Saturday: Tradeshow

Stop by our tradeshow booth. You can also visit our latest venture, Astro Code School at their own booth. Bring a friend and have a showdown with our Ultimate Tic Tac Toe game (you’ll get some pretty sweet stickers too). We’ll also have daily giveaways like Mark’s Lightweight Django and some mini-quadcopters.

Saturday: PyLadies Auction (6pm)

There’s some fantastic art and other items being offered during the PyLadies auction. Caktus is contributing a framed piece showing early concept art and sketches for PyCon 2015.

Sunday: Job Fair

Do you want to join our team and become a part of the nation’s largest Django firm? Then please come by our booth at the job fair. We’d love to talk to you about ways you can grow with Caktus.

The Whole Thing: Outings with Duckling (24/7)

We cannot neglect to mention the giant duck. You’ll find our duck, nicknamed Quaktus, standing next to the "Meet here for Outings" sign. We built Duckling.us to help people create impromptu get togethers during PyCon. You can use the app to figure out where everyone is going for dinner, drinks, etc. and join in the fun.

Astro Code SchoolPyCon 2015 : See You in Montreal!

Caleb Smith and I are going to Montréal, Quebec, Canada next week for PyCon 2015! It's a huge conference all about the open-source Python programming language. Python is a big part of what we teach here at Astro Code School.

We’ll be at booth #613 in Exhibit Hall 210 in the Palais des Congres. Please come look for us. We’ll have the usual swag like t-shirts for women and men. PLUS we’ll have the very addictive game Ultimate Tic Tac Toe. Play against one another on our big touch screen. It’s harder than it sounds. Will you be an Ultimate Tic Tac Toe champion? Can we win more games than Caktus Group?

Caleb is co-presenting with our Caktus colleagues on Wednesday, April 8, from 3:30 p.m. to 5 p.m. on Building SMS Applications with Django. He’s also coaching at the Django Girls Workshop on April 9. No programming experience required. Just bring a laptop and some energy to learn. You’ll be going through the awesome Django Girls tutorial.

I’ll be attending the Python Education Summit. I’m really looking forward to learning more from other professional and amateur Python educators. The talk schedule looks nice!

Are you going to PyCon 2015? What parts of PyCon 2015 are you looking forward to? Tutorial Days, Lightning Talks, or Dev Sprints? Let us know by tweeting at us @AstroCodeschool.

Caktus GroupDiamondHacks 2015 Recap

Image via Diamond Hacks Facebook Page

This past weekend, Technical Director Mark Lavin came out to support DiamondHacks, NCSU’s first ever hackathon and conference event for women interested in computer science. Not only is NCSU Mark’s alma mater, but he’s also a strong supporter of co-organizer Girl Develop It RDU (GDI), of which Caktus is an official sponsor.

The weekend’s events began Saturday with nervous excitement as Facebook developer Erin Summers took the stage for her keynote address. Most memorable for Mark was a moment towards the end of Summers’ talk, in which she called for collaboration between neighboring audience members. It was at this point Mark first realized he was the only male in the room—a unique experience for a male developer. “I’m sure there’s lots of women who have felt the way I did,” Mark commented. The moment not only flipped the norms of a traditionally male-dominated field, but also filled Mark with a renewed appreciation for the importance of active inclusivity in the tech industry.

Aside from helping fill swag bags for the weekend’s participants and attending several of the talks, Mark gave a lightning talk, “Python and Django: Web Development Batteries Included.” Knowing attendees would be thinking about their upcoming projects and which language to build in, Mark chose to advocate for Django (he’s a little biased as the co-author of Lightweight Django). He highlighted the overall uses of Python as well as the extensiveness of its standard library. According to Mark, “Python comes with a lot of built-in functionality,” so it’s a great coding language for beginning developers. Mark also covered the basic Django view and model in his talk, emphasizing the features that make Django a complete framework—an excellent choice for a hackathon.

Since supporting diversity in the tech industry was a key focus of the day, Mark also wanted to emphasize the inclusiveness of the Python and Django communities. From the diversity statement on Python’s website, to Django’s code of conduct, the Python community and Django subcommunity have been at the forefront of advocating for diversity and inclusion in the tech world. For Mark, this element has and continues to be “important for the growth of [the] language,” and has contributed to the vitality of these communities.

All in all, the weekend was a great success, with especially memorable talks given by speakers working for Google, Trinket, and Hirease. Mark was impressed with the students’ enthusiasm and focus and lingered after both iterations of his talk to speak with attendees about their careers and interests. The next day he was equally affected by the range and talent behind Sunday’s hackathon projects as he followed the progress of various teams on Twitter. “These are the students [who] are going to help define what’s next,” he remarked.

Can’t get enough of Python, Django, and the talented Mark Lavin? Neither can we. Mark will be leading a workshop at PyCon on Building SMS Applications with Django along with fellow Cakti David Ray and our code school’s lead instructor, Caleb Smith. We’ll hope to see you there!

Tim HopperParsley the Recipe Parser

A few years ago, I created a Github repo with only a readme for a project I was hoping to start. The project was a tool for parsing ingredients from cooking recipes. I never did start this project, and I just decided to delete the Github repository. What follows is the README file I had written.

The parser should take in an unstructured ingredient recipe string and output a structured version of the ingredient.

In particular, we follow the structure described by Rahul Agarwal and Kevin Miller in a Stanford CS 224n class project. They identify four aspects of an ingredient (bullets quoted directly):

  • AMOUNT: Defines the quantity of some ingredient. Does not refer to lengths of time, sizes of objects, etc.
  • UNIT: Specifies the unit of measure of an ingredient. Examples include "cup", "tablespoons", as well as non-standard measures such as "pinch".
  • INGREDIENT: The main food word of an item that is mentioned in the ingredient list. Groups or transformations of sets of ingredients (such as “dough”) do not fall into this category.
  • DESCRIPTION: A word or phrase that modifies the type of food mentioned, such as the word "chopped".

For example, the ingredient string

1 teaspoon finely chopped, peeled fresh ginger

will be parsed as follows:

  • AMOUNT: 1
  • UNIT : tsp
  • INGREDIENT: ginger
  • DESCRIPTION: finely chopped, peeled

and

2 (11 ounce) can mandarin orange segments, drained

will/might be parsed as:

  • AMOUNT: 22
  • UNIT : oz
  • INGREDIENT: mandarin orange segments
  • DESCRIPTION: drained

Astro Code SchoolMeet Caktus CTO Colin Copeland

This is the third post in a series of interviews about the people at Astro Code School. This one is about Colin Copeland, the CTO and Co-Founder of Caktus Consulting Group. He’s one of the people who came up with the idea for Astro Code School and a major contributor to its creation.

Where were you born?

Oberlin, Ohio

What was your favorite childhood pastime?

Spending time with friends during the Summer.

Where did you go to college and what did you study?

I went to Earlham College and studied Computer Science.

How did you become a CTO of the nation's largest Django firm?

I collaborated with the co-founders on a software engineering project. We moved to North Carolina to start the business. I was lucky to have met them!

How did you and the other Caktus founders come up with the idea to start Astro Code School?

Caktus has always been involved with trainings and trying to contribute back to the Django community where possible, from hosting Django sprints to leading public and private Django trainings on best practices. We're excited to see the Django community grow and saw an opportunity to focus our training services with Astro.

What is one of your favorite things about Python?

Readability. Whether it's reading through some of my old source code or diving into a new open source project, I feel like you can get your bearings quickly and feel comfortable learning or re-learning the code. The larger Django and Python communities are also very welcoming and friendly to new and long time members.

Who are your mentors and how have they influenced you?

So many, but especially my Caktus business partners and colleagues.

Do you have any hobbies?

I'm co-captain of the Code for Durham Brigade.

Which is your favorite Sci-fi or Fantasy fiction? Why?

Sci-fi. I've always loved the books Neuromancer and Snow Crash. Recently I've been enjoying the Silo science fiction series.

Caktus GroupWelcome to Our New Staff Members

We’ve hit one of our greatest growth points yet in 2015, adding nine new team members since January to handle our increasing project load. There are many exciting things on the horizon for Caktus and our clients, so it’s wonderful to have a few more hands on deck.

One of the best things about working at Caktus is the diversity of our staff’s interests and backgrounds. In order of their appearance from left to right in the photos above, here’s a quick look at our new Cakti’s roles and some fun facts:

Neil Ashton

Neil was also a Caktus contractor who has made the move to full-time Django developer. He is a keen student of more than programming languages; he holds two degrees in Classics and another Master’s in Linguistics.

Jeff Bradberry

Though Jeff has been working as a contractor at Caktus, he recently became a full-time developer. In his spare time, he likes to play around with artificial intelligence, sometimes giving his creations a dose of inexplicable, random behavior to better mimic us poor humans.

Ross Pike

Ross is our new Lead Designer and has earned recognition for his work from Print, How Magazine, and the AIGA. He also served in the Peace Corps for a year in Bolivia on a health and water mission.

Lucas Rowe

Lucas joins us for six months as a game designer, courtesy of a federal grant to reduce the spread of HIV. When he’s not working on Epic Allies, our HIV medication app, he can be found playing board games or visiting local breweries.

Erin Mullaney

Erin has more than a decade of development experience behind her, making her the perfect addition to our team of Django developers. She loves cooking healthy, vegan meals and watching television shows laden with 90s nostalgia.

Liza Chabot

Liza is an English major who loves to read, write, and organize, all necessary skills as Caktus’ Administrative and Marketing Assistant. She is also a weaver and sells and exhibits her handwoven wall hangings and textiles in the local craft community.

NC Nwoko

NC’s skills are vast in scope. She graduated from UNC Chapel Hill with a BA in Journalism and Mass Communication with a focus on public relations and business as well as a second major in International Studies with a focus on global economics. She now puts this experience to good use as Caktus’ Digital Health Product Manager, but on the weekends you can find her playing video games and reading comic books.

Edward Rowe

Edward is joining us for six months as a game developer for the Epic Allies project. He loves developing games for social good. Outside of work, Edward continues to express his passion for games as an avid indie game developer, UNC basketball fan, and board and video game player.

Rob Lineberger

Rob is our new Django contractor. Rob is a renaissance man: he's not only a skilled and respected visual artist, he's also trained in bioinformatics, psychology, and information systems, and he knows his way around the kitchen.

To learn more about our team, visit our About Page. And if you’re wishing you could spend your days with these smart, passionate people, keep in mind that we’re still hiring.

Tim HopperAuto Deploying Stigler Diet with Travis CI

I've been using Travis CI for automated testing at work for the last year. It never occurred to me that it could be used to deploy a static website.

Greg Reda wrote a great post on using Travis to automatically build his site and deploy it to S3 every time he pushes to GitHub. I borrowed his brilliance to implement the same technique here.
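The essence of the approach is a `.travis.yml` that rebuilds the site on each push and syncs the generated output to S3. A minimal sketch might look like the following; the generator, build commands, and bucket name here are placeholders, not the actual configuration used for this site:

```yaml
# Hypothetical .travis.yml — builds a static site and pushes it to S3.
language: python
python:
  - "2.7"
install:
  - pip install pelican s3cmd      # placeholder: whatever generator/uploader you use
script:
  - make publish                   # build the site into output/
after_success:
  - s3cmd sync output/ s3://example-bucket/ --acl-public --delete-removed
env:
  global:
    # AWS credentials live in encrypted Travis environment variables,
    # never in the repository itself.
    - secure: "..."
branches:
  only:
    - master
```

The `secure:` value is generated with the `travis encrypt` command-line tool, so the keys never appear in plain text in the repo.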

Tim HopperSundry Links for March 23, 2015

Start Using Landsat on AWS: "The Landsat program has been running since 1972 and is the longest ongoing project to collect such imagery. Landsat 8 is the newest Landsat satellite and it gathers data based on visible, infrared, near-infrared, and thermal-infrared light. … You can now access over 85,000 Landsat 8 scenes" on AWS.

Beginner's Guide to Linkers: I’m getting back into doing a little C++ programming. Having spent the last 5 years in scripting languages, this was a helpful refresher on compilation.

How to Auto-Forward your Gmail Messages in Bulk: Use Google Apps Script to auto-forward emails simply by adding a label. I use this to add things to my OmniFocus task list.

Which one result in mathematics has surprised you the most?: On Mathematics Stack Exchange. It might have been Huffman Coding for me.

Ruby Midwest 2013 The Most Important Optimization: Happiness: Ernie Miller explains why he doesn’t let his career trump his happiness.

Sake by tonyfischetti: Something of a modern GNU Make: "Sake is a way to easily design, share, build, and visualize workflows with intricate interdependencies. Sake is self-documenting because the instructions for building a project also serve as the documentation of the project's workflow."

n1k0/SublimeHighlight: "An humble SublimeText package for exporting highlighted code as RTF or HTML."

Caktus GroupAstro Code School Now Accepting Applications - Intermediate Django + Python

I'm really happy to officially announce the first Python and Django Web Engineering class at Astro Code School. I’ll outline some details here and you can also find them on our classes page.

This class is twelve weeks long and full-time, Monday to Friday from 9 AM to 5 PM. It'll be taught here at Astro Code School at 108 Morris Street, Suite 1b, Durham, NC. We will conduct two Python and Django Web Engineering classes in 2015. The first, in term two, starts May 18, 2015 and ends August 10, 2015. The second, in term three, starts September 22, 2015 and ends December 15, 2015.

Enrollment for both sections opens today, March 20. There is space for twelve students in each class. More information about the enrollment process is on our Apply page. Part of that process is an entrance exam designed to ensure you're ready to succeed. The price per person for Python and Django Web Engineering is $12,000.

The Python and Django Web Engineering class is intended for intermediate-level students. Its goal is to help you start your career as a backend web engineer. To start down this path, we recommend you prepare yourself. A few things you can do are: read some books on Python & Django, complete the Django Girls tutorial, watch videos on YouTube, and take an online class or two in Python.

Python and Django make a powerful team to build maintainable web applications quickly. When you take this course you will build your own web application during lab time with assistance from your teacher and professional Django developers. You’ll also receive help preparing your portfolio and resume to find a job using the skills you’ve learned.

Here's the syllabus:

  1. Python Basics, Git & GitHub, Unit Testing
  2. Object Oriented Programming, Functional Programming, Development Process, Command Line
  3. HTML, HTTP, CSS, LESS, JavaScript, DOM
  4. Portfolio Development, Intro to Django, Routing, Views, Templates
  5. SQL, Models, Migrations, Forms, Debugging
  6. Django Admin, Integrating Apps, Upgrading Django, Advanced Django
  7. Ajax, JavaScript, REST
  8. Linux Admin, AWS, Django Deployment, Fabric
  9. Interviewing Skills, Computer Science Topics, Review
  10. Final Project Labs
  11. Final Project Labs
  12. Final Project Labs


This comprehensive course is taught by experienced developer and trained teacher Caleb Smith. He's been working full-time at Caktus Consulting Group, the founder of Astro Code School and the nation’s largest Django firm. He’s worked on many client projects over the years. He’s also applied his experience as a former public school teacher to teaching Girl Develop It Python classes and serving as an adjunct lecturer at the University of North Carolina-Chapel Hill. I think you'll really enjoy working with and learning from Caleb. He's a wonderful person.

For the past six months we've been working very hard to launch the school. A large amount of our time has been spent on an application to receive our license from the State of North Carolina to conduct a proprietary school. As of today, Astro is one of two code schools in North Carolina that have received this license. We found it a very important task to undertake. It helped us do our due diligence to run an honest and fair school that will protect the rights of students who will be attending Astro Code School. This long process also explains why we've waited to tell you all the details: we're required to wait until we have a license to open our application process.

Thanks for checking out Astro Code School. If you have any questions please contact me.

Caktus GroupWhy RapidSMS for SMS Application Development

Caktus has been involved in quite a few projects (Libyan voter registration, UNICEF Project Mwana, and several others) that include text messaging (a.k.a. Short Message Service, or SMS), and we always use RapidSMS as one of our tools. We've also invested our own resources in supporting and extending RapidSMS.

There are other options; why do we consistently choose RapidSMS?

What is RapidSMS?

First, what is RapidSMS? It's an open source package of useful tools that extend the Django web development framework to support processing text messages. It includes:

  • A framework for writing code to be invoked when a text message is received, and to respond to it
  • A set of backends - pluggable code modules that can interface to various ways of connecting your Django program to the phone network to pass text messages back and forth
  • Sample applications
  • Documentation

The backends are required because unlike email, there's no universal standard for sending and receiving text messages over the Internet. Often we get access to the messages via a third party vendor, like Twilio or Tropo, that provides a proprietary interface. RapidSMS isolates us from the differences among vendors.
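To make the handler idea concrete: RapidSMS lets you register classes that respond to incoming messages matching a keyword. The snippet below is a plain-Python illustration of that dispatch pattern, not the actual RapidSMS API (its real handler classes, such as `KeywordHandler`, carry more machinery for routing and replies):

```python
# Illustrative sketch of the keyword-dispatch pattern used by RapidSMS
# handlers. Plain Python for clarity — not the RapidSMS API itself.

def make_dispatcher(handlers):
    """Return a function that routes an incoming message to a handler
    based on its first word, the way a keyword handler does."""
    def dispatch(text):
        keyword, _, rest = text.strip().partition(" ")
        handler = handlers.get(keyword.lower())
        if handler is None:
            return "Unknown keyword. Try: " + ", ".join(sorted(handlers))
        return handler(rest.strip())
    return dispatch

def register(name):
    # The reply a hypothetical health-reporting app might send.
    return "Thanks, %s. You are now registered." % name

dispatch = make_dispatcher({"register": register})
print(dispatch("REGISTER Alice"))  # Thanks, Alice. You are now registered.
```

In a real RapidSMS app, the framework performs this routing for you and hands each handler a message object it can respond to through the configured backend.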

RapidSMS is open source, under the BSD license, with UNICEF acting as holder of the contributors' agreements (granting a license for RapidSMS to use and distribute their contributions). See the RapidSMS license for more about this.

Alternatives

Here are some of the alternatives we might have chosen:

  • Writing from scratch: starting each project new and building the infrastructure to handle text messages again
  • Writing to a particular vendor's API: writing code that sends and receives text messages using the programming interface provided by one of the online vendors that provide that service, then building applications around that
  • Other frameworks

Why RapidSMS

Why did we choose RapidSMS?

  • RapidSMS builds on Django, our favorite web development framework.
  • RapidSMS is at the right level for us. It provides components that we can use to build our own applications the way we need to, and the flexibility to customize its behavior.
  • RapidSMS is open source, under the BSD license. There are no issues with our use of it, and we are free to extend it when we need to for a particular project. We then have the opportunity to contribute our changes back to the RapidSMS community.
  • RapidSMS is vendor-neutral. We can build our applications without being tied to any particular vendor of text messaging services. That's good for multiple reasons:
      • We don't have to pick a vendor before we can start.
      • We could change vendors in the future without having to rewrite the applications.
      • We can deploy applications to different countries that might not have any common vendor for messaging services.

It's worth noting that using RapidSMS doesn't even require using an Internet text messaging vendor. We can use other open source applications like Vumi or Kannel as a gateway to provide us with even more options:

  • use hardware called a "cellular/GSM modem" (basically a cell phone with a connection to a computer instead of a screen)
  • interface directly to a phone company's own servers over the Internet, using several widely used protocols

Summary

RapidSMS is a good fit for us at Caktus, it adds a lot to our projects, and we've been pleased to be able to contribute back to it.

Caktus will be leading a workshop on building RapidSMS applications during PyCon 2015 on Tuesday, April 7th 3:00-5:30.

Tim HopperSundry Links for March 13, 2015

Dynamically Update a Plot in IPython: One thing I miss about Mathematica is Animate and Manipulate. IPython is slowly getting similar functionality. Here’s how to dynamically update a plot.

Jiahao's IPython Notebook customizations: Drop this CSS file on your machine, and suddenly your IPython notebooks look quite beautiful!

Duet Display: I tried Air Display a few years ago, and it wasn’t worth the hassle. But Duet Display is a fantastic way to turn your iPad into an external display.

Creating publication-quality figures with Matplotlib: Plotting in Python frustrates me to no end. But here’s a nice tutorial on creating nice figures with Matplotlib.

retrying 1.3.3 : Python Package Index: Python decorators "to simplify the task of adding retry behavior to just about anything." These work like a charm!
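The core idea behind such a retry decorator can be sketched in a few lines of plain Python (a simplified illustration, not the retrying package's actual implementation):

```python
import functools
import time

def retry(attempts=3, delay=0.0):
    """Retry the wrapped function up to `attempts` times, sleeping
    `delay` seconds between tries; re-raise the last failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator

calls = []

@retry(attempts=3)
def flaky():
    # Fails twice, then succeeds — a stand-in for a flaky network call.
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(flaky())  # ok (after two failed attempts)
```

The real package layers on configurable wait strategies (such as exponential backoff) and predicates for deciding which exceptions or results should trigger a retry.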

Astro Code SchoolMeet Brian Russell Our Director

This is the second in a series of interviews about the people at Astro Code School. This one is about Brian Russell, the Astro Code School Director. He's the guy who does day-to-day management and works to tell the world about the school.

Where were you born?
I was born in Richmond, Virginia.

What was your favorite childhood pastime?
Drawing with a pencil on paper.

Where did you go to college and what did you study?
I went to Virginia Commonwealth University and earned a Bachelor of Fine Arts in Sculpture and Painting. The majority of my studio work involved creating video installation art. This was done using early non-linear video editing software to create short movies. Those movies were then displayed in sculpture that involved performance art and dance choreography.

How did you go from being a fine artist to a director of a school?
It's a rather long and winding road. But after college I immediately started working as a graphic designer. This led to web design and development work. Later I started a business called Carrboro Creative Coworking and cut my teeth running a business. During that time I worked for many different corporations and several universities doing tech support and teaching multimedia software. Technical literacy education is a real thread of interest in my career.

What is one of your favorite things about Python?
I really like Python's readability and how approachable that makes it. Plus the people I've met in the Python community locally and internationally are really cool.

Who are your mentors and how have they influenced you?
Besides my art professors I've learned a lot from my accountant. Seriously, she's awesome! :)

Do you have any hobbies?
I am an avid film photographer. Right now I'm deep into medium format film.

Which is your favorite Sci-fi or Fantasy fiction? Why?
Science fiction, hands down. I've been a sci-fi geek ever since I saw the "first" Star Wars in the theater. Plus I love William Gibson's writing and Star Trek. LLAP!

Caktus GroupCaktus is Durham Living Wage Certified

Caktus Group recently became a Durham Living Wage Certified Employer! The Durham Living Wage Certification Program is a project of the Durham People’s Alliance. The group asks that local businesses voluntarily certify themselves as living wage employers in order to identify, acknowledge, and celebrate those businesses. A living wage is the amount of income needed for an individual to meet her or his basic needs without public or private assistance. Caktus is proud to be a part of efforts to build a just and sustainable local economy.

You can read more about the program and the requirements for certification at DurhamLivingWage.org, or you can apply online to get your business certified.

Josh JohnsonDevOps Is Bullshit: Why One Programmer Doesn’t Do It Anymore

I’ve always been handy with hardware. I was one of “those kids” you hear about that keeps taking things apart just to see how they work – and driving their parents nuts in the process. When I was a teenager, I toyed with programming but didn’t get serious with it until I decided I wanted to get into graphic design. I found out that you don’t have to write HTML yourself, you can use programming to do it for you!

But I never stopped tinkering with hardware and systems. I used Linux and BSD on my desktop for years and built my LAMP stacks from source, and I simulated the server environment when I couldn’t: when I used Windows for work, and when I eventually adopted Apple as my primary platform, I first started with cross-compiled versions of the components and eventually got into virtualization.

In the early days (maybe 10 years ago) there seemed to be few programmers who were like me, or if they were, they never took “operations” or “sysadmin” jobs, and neither did I. So there was always a natural divide. Aside from being a really nice guy who everyone likes, I had a particular rapport with my cohorts who specialized in systems.

I’m not sure exactly what it was. It may have been that I was always interested in the finer details of how a system works. It may have been my tendency to document things meticulously, or my interest in automation and risk reduction. It could have just been that I was willing to take the time to cross the divide and talk to them, even when I didn’t need something. It may have just boiled down to the fact that when they were busy, I could do things myself, and I wanted to follow their standards, and get their guidance. It’s hard to tell, even today, as my systems skills have developed beyond what they ever were before, but the rapport has continued on.

And then something happened. As my career progressed, I took on more responsibilities and did more and more systems work. This was partly because of the divide widening to some extent at one particular job, but mostly because I could. Right around this time the “DevOps Revolution” was beginning.

Much like when I was a teenager and everyone needed a web site, suddenly everyone needed DevOps.

I didn’t really know what it was. I was aware of the term, but being a smart person, I tend to ignore radical claims of great cultural shifts, especially in technology. In this stance, I find myself feeling a step or two behind at times, but it helps keep things in perspective. Over time, technology changes, but true radicalism is rare. Most often, a reinvention or revisiting of past ideas forms the basis for such claims. This “DevOps” thing was no different. Honestly, at the time it seemed like a smoke screen; a flashy way to save money for startups.

I got sick of tending systems – when you’re doing it properly, it can be a daunting task. Dealing with storage, access control, backups, networking, high availability, maintenance, security, and all of the domain-specific aspects can easily become overwhelming. But worse, I was doing too much front-line support, which honestly, at the time was more important than the programming it was distracting me from. I love my users, and I see their success as my success. I didn’t mind it, but the bigger problems I wanted to solve were consistently being held above my head, just out of my grasp. I could ignore my users or ignore my passion, and that was a saddening conundrum. I felt like all of the creativity I craved was gone, and I was being paid too much (or too little depending on if you think I was an over paid junior sysadmin or an under paid IT manager with no authority) to work under such tedium. So I changed jobs.

I made the mistake of letting my new employer decide where they wanted me to go in the engineering organization.

What I didn’t know about this new company was that it had been under some cultural transition just prior to bringing me on board. Part of that culture shift was incorporating so-called “DevOps” into the mix. By fiat or force.

Because of my systems experience, I landed on the front line of that fight: the “DevOps Team”. I wasn’t happy.

But as I dug in, I saw some potential. We had the chance to really shore up the development practices, reduce risk in deployments, make the company more agile, and ultimately make more money.

We had edicts to make things happen, under the assumption that if we built it, the developers would embrace it. These things included continuous integration, migrating from subversion to git, building and maintaining code review tools, and maintaining the issue tracking system. We had other, less explicit responsibilities that became central to our work later on, including developer support, release management, and interfacing with the separate, segregated infrastructure department. This interaction was especially important, since we had no systems of our own, and we weren’t allowed to administer any machines. We didn’t have privileged access to any of the systems we needed to maintain for a long time.

With all the hand-wringing and flashing of this “DevOps” term, I dug in and read up on what all the hubbub was about. I then realized something. What we were doing wasn’t DevOps.

Then I realized something else. I was DevOps. I always had been. The culture was baked into the kind of developer I was. Putting me, and other devs with similar culture on a separate team, whether that was the “DevOps” team or the infrastructure team was a fundamental mistake.

The developers didn’t come around. At one point someone told a teammate of mine that they thought we were “IT support”. What needed to happen was the developers had to embrace the concept that they were capable of doing at least some systems things themselves, in safe and secure manner, and the infrastructure team had to let them do it. But my team just sat there in the middle, doing what we could to keep the lights on and get the code out, but ultimately just wasting our time. Some developers starting using AWS, with the promise of it being a temporary solution, but in a vacuum nonetheless. We were not having the impact that management wanted us to have.

My time at this particular company ended in a coup of sorts. This story is worthy of a separate blog post some day when it hurts a little less to think about. But let’s just say I was on the wrong side of the revolution and left as quickly as I could when it was over.

In my haste, I took another “DevOps” job. My manager there assured me that it would be a programming job first, and a systems job second. “We need more ‘dev’ in our ‘devops'”, he told me.

What happened was very similar to my previous “DevOps” experience, but more acute. Code, and often requirements, were thrown over the wall at the last minute. As it fell in our laps, we scrambled to make it work, and work properly, as it seemed no one would think of things like failover or backups or protecting private information when they were making their plans. Plans made long ago, far away, and without our help.

This particular team was more automation focused. We had two people who were more “dev” than “ops”, and the operations people were no slouches when it came to scripting or coding in their own right.

It was a perfect blend, and as a team we got along great and pulled off some miracles.

But ultimately, we were still isolated. We, and our managers tried to bridge the gap to no avail. Developers, frustrated with our sizable backlog, went over our heads to get access to our infrastructure and started doing it for themselves, often with little or no regard for our policies or practice. We would be tasked with cleaning up their mess when it was time for production deployment – typically in a major hurry after the deadline had passed.

The original team eventually evaporated. I was one of the last to leave, as new folks were brought into a remote office. I stuck it out for a lot of reasons: I was promised transfer to NYC, I had good healthcare, I loved my team. But ultimately what made me stick around was the hope, that kept getting rebuilt and dashed as management rotated in and out above me, that we could make it work.

I took the avenue of providing automated tools to let the developers have freedom to do as they pleased, yet we could ensure they were complying with company security guidelines and adhering to sane operations practices.

Sadly, politics and priorities kept my vision from coming to reality. It’s OK, in hindsight, because so much more was broken about so-called “DevOps” at this particular company. I honestly don’t think that it could have made that much of a difference.

Near the end of my tenure there, I tried to help some of the developers help themselves by sitting with them and working out how to deploy their code properly side-by-side. It was a great collaboration, but it fell short. It represented a tiny fraction of the developers we supported. Even with those really great developers finally interfacing with my team, it was too little, too late.

Another lesson learned: you can’t force cultural change. It has to start from the bottom up, and it needs breathing room to grow.

I had one final “DevOps” experience before I put my foot down and made the personal declaration that “DevOps is bullshit”, and I wasn’t going to do it anymore.

Due to the titles I had taken, and the experiences of the last couple of years, I found myself in a predicament. I was seen by recruiters as a “DevOps guy” and not as a programmer. It didn’t matter that I had 15 years of programming experience in several languages, or that I had focused on programming even in these so-called “DevOps” jobs. All that mattered was that, as a “DevOps Engineer” I could be easily packaged for a high-demand market.

I went along with the type casting for a couple of reasons. First, as I came to realize, I am DevOps – if anyone was going to come into a company and bridge the gap between operations and engineering, it’d be me. Even if the company had a divide, which every company I interviewed with had, I might be able to come on board and change things.

But there was a problem. At least at the companies I interviewed at, it seemed that “DevOps” really meant “operations and automation” (or more literally “AWS and configuration management”). The effect this had was devastating. The somewhat superficial nature of parts of my systems experience got in the way of landing some jobs I would have been great at. I was asked questions about things that had never been a problem for me in 15 years of building software and systems to support it, and being unable to answer, but happy to talk through the problem, would always end in a net loss.

When I would interview at the few programming jobs I could find or the recruiters would give me, they were never for languages I knew well. And even when they were, my lack of computer science jargon bit me – hard. I am an extremely capable software engineer, someone who learns quickly and hones skills with great agility. My expertise is practical, however, and it seemed that the questions that needed to be asked, that would have illustrated my skill, weren’t. I think to them, I looked like a guy who was sick of systems that was playing up their past dabbling in software to change careers.

So it seemed “DevOps”, this great revolution, and something that was baked into my very identity as a programmer, had left me in the dust.

I took one final “DevOps” job before I gave up. I was optimistic, since the company was growing fast and I liked everyone I met there. Sadly, it had the same separations, and was subject to the same problems. The developers, who I deeply respected, were doing their own thing, in a vacuum. My team was unnecessarily complicating everything and wasting huge amounts of time. Again, it was just “ops with automation” and nothing more.

So now let’s get to the point of all of this. We understand why I might think “DevOps is bullshit”, and why I might not want to do it anymore. But what does that really mean? How can my experiences help you, as a developer, as an operations person, or as a company with issues they feel “DevOps” could address?

Don’t do DevOps. It’s that simple. Apply the practices and technology that comprise what DevOps is to your development process, and stop putting up walls between different specialties.

A very wise man once said, “If you have a DevOps team, you’re doing it wrong.” If you’re doing that, stop it.

There is some nuance here, and my experience can help save you some trouble by identifying some of the common mistakes:

  • DevOps doesn’t make specialists obsolete.
  • Developers can learn systems and operations, but nothing beats experience.
  • Operations people can learn development too, but again, nothing beats experience.
  • Operations and development have historically been separated for a reason – there are compromises you must make if you integrate the two.
  • Tools and automation are not enough.
  • Developers have to want DevOps. Operations have to want DevOps. At the same time.
  • Using “DevOps” to save money by reducing staff will blow up in your face.
  • You can’t have DevOps and still have separate operations and development teams. Period.

Let me stop for one moment and share another lesson I’ve learned: if it ain’t broke, don’t fix it.

If you have a working organization that seems old fashioned, leave it alone. It’s possible to incorporate the tech, and even some of the cultural aspects of DevOps without radically changing how things work – it’s just not DevOps anymore, so don’t call it that. Be critical of your process and practices, kaizen and all that, but don’t sacrifice what works just to join the cargo cult. You will waste money, and you will destroy morale. The pragmatic operations approach is the happiest one.

Beware of geeks bearing gifts.

So let’s say you know why you want DevOps, and you’re certain that the cultural shift is what’s right for your organization. Everyone is excited about it. What might a proper “DevOps” team look like?

I can speak to this, because I currently work in one.

First, never call it “DevOps”. It’s just what you do as part of your job. Some days you’re writing code, other days you’re doing a deployment, or maintenance. Everyone shares all of those responsibilities equally.

People still have areas of experience and expertise. This isn’t pushing people into a lukewarm, mediocre dilution of their skills – this is passionate people doing what they love. It’s just that part of that is launching a server or writing a Chef recipe or debugging a production issue.

As such you get a truly cross functional team. Where expertise differs, first, there’s a level of respect and trust. So if someone knows more about a topic than someone else, they will likely be the authority on it. The rest of the team trusts them to steer the group in the right direction.

This means that you can hire operations people to join your team. Just don't give them exclusive responsibility for what they're best at – integrate them. The same goes for any “non-developer” skillset, be that design, project management, or whatever.

Beyond that, everyone on the team has a thirst to develop new skills and look at their work in different ways. This is when the difference in expertise provides an opportunity to teach. Teaching brings us closer together and helps us all gain better understanding of what we’re doing.

So that's what DevOps really is. You take a bunch of really skilled, passionate, talented people who don't have their heads shoved so far up their own asses that they can't take the time to learn new things. People who see the success of the business as a combined responsibility that is equally shared. “That's not my job” is not something they are prone to saying, but they're happy to delegate or share a task if need be. You give them the infrastructure, and time (and encouragement doesn't hurt), to build things in a way that makes the most sense for their productivity, and the business, embracing that equal, shared sense of responsibility. Things like continuous integration and zero-downtime deployments just happen as a function of smart, passionate people working toward a shared goal.

It’s an organic, culture-driven process. We may start doing continuous deployment, or utilize “the cloud”, or treat our “code as infrastructure”, but only if it makes sense. The developers are the operations people and the operations people are the developers. An application system is seen in a holistic manner and developed as a single unit. No one is compromising; we all get better as we all just fucking do it.

DevOps is indeed bullshit. What matters is good people working together without artificial boundaries. Tech is tech. It’s not possible for everyone to share like this, but when it works, it’s amazing – but is it really DevOps? I don’t know, I don’t do that anymore.


Astro Code SchoolMeet Caleb Smith Our Lead Instructor

This is the first in a series of interviews about the people at Astro Code School. This one is about Caleb Smith, the Astro Code School Lead Instructor. He's the one writing the curriculum for our Python & Django Web Engineering class, which he's teaching this year.

Where were you born?
I grew up in Hickory, in the piedmont of North Carolina.

What was your favorite childhood pastime?
Programming DOS games in BASIC. I spent far too much time working on making an RPG I called "Water and Stone".

Where did you go to college and what did you study?
I studied Music Education at UNC-Greensboro.

How did you get into Web Development with Python and Django?
After about two years of learning C++ and front-end web development on my own, I moved to the Triangle area hoping to find a role in the tech sector. I applied for the Caktus summer internship and was able to ramp up quickly thanks to some excellent mentorship from the team. I was hired on as a junior developer after that as my first professional job doing web development.

What did you do professionally before becoming a web developer?
I taught elementary music K-5 in Asheville, North Carolina for two years. I found public school teaching really rewarding but difficult. I spent a lot of my free time doing hobby programming until deciding to pursue programming professionally.

What is one of your favorite things about Python? What about Django?
I like the readability of Python the most and I also appreciate that it is well designed but practical considerations are allowed to trump purity. It makes for a really nice language and system to work in. I like that Django makes so many details of web development irrelevant because it abstracts over them well and is also careful about correctness and security concerns.

Who are your teaching mentors and how did they influence how you teach?
I learned the most from Dr. Randy Kohlenburg, my trombone teacher at UNC-Greensboro. Dr. Kohlenburg thinks a lot about pedagogy and taught us a lot about how to apply those ideas in our own teaching. He's the best mentor I've ever had.

Is there a connection between music and computer programming for you?
When I was about 12 I became really interested in music, joined band, and pursued music education in college. While taking music theory courses, especially in post-tonal analysis, I thought of ways to automate the work involved. I wrote some simple BASIC programs to help double check my work. I rewrote this later in C++ and yet again in Python, which I eventually released as the sator library on PyPI. Through this work, I realized that I had a strong interest in programming that went beyond my initial interest of making games as a kid.

Do you have any hobbies?
I still play trombone and guitar when I can find the time. I've recently been trying to pick up khoomei. (Editor: A type of throat singing.) I'd like to eventually do something with programming and electronic music.

Which is your favorite Sci-fi or Fantasy fiction? Why?
Sci-fi. "Dune" and "Neuromancer" are two of my all-time favorites, and some recent works like "Leviathan Wakes" are great reads too. Good science fiction captures my imagination of where society might be heading in ways that fantasy doesn't, though I do like fantasy too.

Joe GregorioSix Places

One of the questions that comes up regularly when talking about zero frameworks is how can you expect to stitch together an application without a framework? The short answer is "the same way you stitch together native elements," but I think it's interesting and instructional to look at those ways of stitching elements together individually.

There are six surfaces, or points of contact, between elements that you can use when stitching elements together, whether they are native or custom elements.

Before we go further, a couple of notes on terminology and scope. For scope, realize that we are only talking about the DOM; we aren't talking about composing JS modules or strategies for composing CSS. For the terminology clarification, when talking about the DOM I'm referring to the DOM Interface for an element, not the element markup. Note that there is a subtle difference between the markup element and the DOM Interface to such an element.

For example, <img data-foo="5" src="https://example.com/image.png"/> may be the markup for an image. The corresponding DOM Interface has an attribute of src with a value of "https://example.com/image.png", but the corresponding DOM Interface doesn't have a "data-foo" attribute; instead, all data-* attributes are available via the dataset attribute on the DOM Interface. In the terminology of the WhatWG Living Standard, this is the distinction between content attributes vs. IDL attributes, and I'll only be referring to IDL attributes. So with the preliminaries out of the way, let's get into the six surfaces that can be used to stitch together an application.

Attributes and Methods

The first two surfaces, and probably the most obvious, are attributes and methods. If you are interacting with an element it's usually either reading or writing attribute values:

element.children

or calling element methods:

document.querySelector('#foo');

Technically these are the same thing, as they are both just properties with different types. Native elements have their set of defined attributes and methods, and depending on which element a custom element is derived from it will also have that base element's attributes and methods along with the custom ones it defines.

Events

The next two surfaces are events. Events are actually two surfaces because an element can listen for events,

ele.addEventListener('some-event', function(e) { /* */ });

and an element can dispatch its own events:

var e = new CustomEvent('some-event', {detail: details});
this.dispatchEvent(e);

DOM Position

The final two surfaces are position in the DOM tree, and again I'm counting this as two surfaces because each element has a parent and can be a parent to another element. Yeah, an element has siblings too, but that would bring the total count of surfaces to seven and ruin my nice round even six.

<button>
  <img src="">
</button>

Combinations are powerful

Let's look at a relatively simple but powerful example, the 'sort-stuff' element. This is a custom element that allows the user to sort elements. All children of 'sort-stuff' with an attribute of 'data-key' are used for sorting the children of the element pointed to by the sort-stuff's 'target' attribute. See below for an example usage:

<sort-stuff target="#sortable">
  <button data-key=one>Sort on One</button>
  <button data-key=two>Sort on Two</button>
</sort-stuff>
<ul id=sortable>
  <li data-one=c data-two=x>Item 3</li>
  <li data-one=a data-two=z>Item 1</li>
  <li data-one=d data-two=w>Item 4</li>
  <li data-one=b data-two=y>Item 2</li>
  <li data-one=e data-two=v>Item 5</li>
</ul>

If the user presses the "Sort on One" button then the children of #sortable are sorted in alphabetical order of their data-one attributes. If the user presses the "Sort on Two" button then the children of #sortable are sorted in alphabetical order of their data-two attributes.

Here is the definition of the 'sort-stuff' element:

    function Q(query) {
      return Array.prototype.map.call(
        document.querySelectorAll(query),
          function(e) { return e; });
    }

    var SortStuffProto = Object.create(HTMLElement.prototype);

    SortStuffProto.createdCallback = function() {
      Q('[data-key]').forEach(function(ele) {
        ele.addEventListener('click', this.clickHandler.bind(this));
      }.bind(this));
    };

    SortStuffProto.clickHandler = function(e) {
      var target = Q(this.getAttribute('target'))[0];
      var elements = [];
      var children = target.children;
      for (var i=0; i<children.length; i++) {
        var ele = children[i];
        var value = ele.dataset[e.target.dataset.key];
        elements.push({
          value: value,
          node: ele
        });
      }
      elements.sort(function(x, y) {
        return (x.value == y.value ? 0 : (x.value > y.value ? 1 : -1));
      });
      elements.forEach(function(i) {
        target.appendChild(i.node);
      });
    };

    document.registerElement('sort-stuff', {prototype: SortStuffProto});

And here is a running example of the code above:

  • Item 3
  • Item 1
  • Item 4
  • Item 2
  • Item 5

Note the surfaces that were used in constructing this functionality:

  1. sort-stuff has an attribute 'target' that selects the element to sort.
  2. The target children have data attributes that the elements are sorted on.
  3. sort-stuff registers for 'click' events from its children.
  4. sort-stuff children have data attributes that determine how the target children will be sorted.

In addition, you could imagine adding a custom event 'sorted' that 'sort-stuff' could generate each time it sorts.

So there are your six surfaces that you can use when composing elements into your application. And why the insistence on making the number of surfaces equal six? Because while history may not repeat itself, it does rhyme.

Astro Code SchoolSeven Features an Introductory Programming Language Should Have

Python Logo

Python has recently supplanted Java as the most popular introductory teaching language at top U.S. universities. There are many articles covering this fact from the perspectives of computer science faculty at major universities. I wanted to take a moment to add my own thoughts on the subject.

There are several key features of Python that make it more suitable as an introductory language compared to Java:

  1. A more gradual learning curve

  2. Object-oriented programming is not required

  3. Designed for readability

  4. Less verbosity and boilerplate

  5. An interactive shell for exploratory development

I'd like to tease out each of these points. While both languages can be used to write large and complicated programs, the path from an empty directory to a simple and working program is much more straightforward in Python.

Programmers with little experience can use Python to do simple tasks such as web scraping within a few days or weeks of using the language. There are advanced concepts to learn, but the learning curve is more gradual because more can be accomplished in Python with only simpler, more foundational concepts such as variables and control flow.
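As a sketch of what that early web-scraping style of task can look like using nothing beyond the standard library (the HTML snippet and the `LinkCollector` class name here are invented for illustration), consider pulling the link text out of a fragment of HTML:

```python
from html.parser import HTMLParser

# Collect the text of every link in an HTML snippet -- the kind of small,
# concrete program a beginner can write with only variables, lists, and
# control flow.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link:
            self.links.append(data)

html = '<p>See <a href="/a">first</a> and <a href="/b">second</a>.</p>'
parser = LinkCollector()
parser.feed(html)
print(parser.links)  # ['first', 'second']
```

Nothing here requires classes to be understood deeply up front; the same scaffolding works unchanged on real pages fetched with the standard library.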

Courses that use Java as the teaching language focus heavily on object-oriented programming (or "OOP"). While Python is also object-oriented, it is a multi-paradigm language that can also be used with the functional or structured programming paradigms. While it is important to learn OOP eventually, many learners catch on more quickly to the more concrete structured programming paradigm. It is my view that learning about OOP in the level of detail needed to write a Java program, before completing several small working programs, is a pedagogical mistake that fundamentally puts these steps out of sequence: a learner should first write small programs before approaching the techniques and concepts used for writing larger ones well.

Furthermore, Python was designed with readability in mind and is known for mimicking pseudo-code more closely than other programming languages. Learning the keywords and syntax needed for Java programming obfuscates the overall goal of an introductory course: to teach fundamental programming concepts that transcend a given language or problem domain and enable the learner to obtain key insights that will continue to serve them as they learn more computer science and software engineering concepts.

Lastly, like many other languages, Python features an interactive shell that allows the programmer to try small bits of code at a time and to explore the program being developed from within. This shortens the feedback loop of trying out new ideas compared to having separate compile and run steps. The advantages of an iterative approach with a quick feedback loop for a beginner should be obvious.
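For instance, a beginner can probe the language one expression at a time in the shell; the snippet below replays that kind of session as a script, with the shell's answers shown in comments:

```python
# A few one-liners of the sort a learner might type at the >>> prompt,
# written here as a script so the answers can be shown in comments.
greeting = "hello, world"
print(greeting.title())      # Hello, World
print(len(greeting))         # 12
print(greeting.split(", "))  # ['hello', 'world']
```

Each line gives immediate feedback, so a mistaken guess about what a method does is corrected in seconds rather than after a compile-and-run cycle.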

With all of this in mind, I find it hard to imagine how Java ever became a common introductory language at all. Explanations for this usually center around the importance of object-oriented programming and the ubiquity of Java in the industry. While these are both good reasons to learn Java, possibly even as a second language, they are far from convincing for the purpose of introducing programming.

I would argue that the features of an introductory programming language are:

  1. A shallow learning curve

  2. A clear and consistent language design

  3. Many libraries available for a variety of needs

  4. An interactive environment such as a shell

  5. Light on clutter, boilerplate or superfluous details

  6. An obvious path toward creating small and simple programs

  7. A rapid rate of development

I've outlined how Python meets most of these points already. On the point regarding libraries, I think Java and Python both feature a rich ecosystem for beginners and experienced programmers alike. However, considering all of these points, I think a number of languages are more appropriate as an introductory teaching language than Java, including at least the following:

  1. JavaScript

  2. Ruby

  3. Scheme

This brings to mind a much more interesting and difficult question. What makes Python a more appropriate first language than each of these? I'll leave this to a future blog post because I think it needs careful and long form comparisons.

I hope to have made clear why I'm glad that major universities are making the shift to Python for introductory courses. In the future, I hope to broaden this argument and describe how Python is the best first language to learn.

Joe GregoriogRPC

Today Google launched gRPC, a new HTTP/2 and Protocol Buffer based system for building APIs. This is Google's third system for web APIs.

The first system was Google Data, which was based on the Atom Publishing Protocol [RFC 5023]. It was an XML protocol over HTTP. The serving system for that grew, but started to hit scalability issues at around 50 APIs. The scaling issues weren't in the realm of serving QPS, but more in the management of that many APIs, such as rolling out new features across all APIs and all clients.

Those growing pains and lessons learned led to the next generation of APIs that launched in 2010. In addition to writing a whole new serving infrastructure to make launching APIs easier, it was also a time to shed XML and build the protocol on JSON. This Google I/O video contains a good overview of the system:

Now, five years later, a third generation API system has been developed, and the team took the opportunity to make another leap, moving to HTTP/2 and Protocol Buffers. This is the first web API system from Google that I haven't been involved in, but I'm glad to see them continuing to push the envelope on web APIs.

Caktus GroupPyCon 2015 Ticket Giveaway

Caktus is giving away a PyCon 2015 ticket, valued at $350. We love going to PyCon every year. It’s the largest gathering of developers using Python, the open source programming language that Caktus relies on. This year, it’ll be held April 8th-16th at the beautiful Palais des congrès de Montréal (the inspiration we used to design the website).

To enter, follow @caktusgroup on Twitter and RT this message.

The giveaway will end Tuesday, March 3rd at 12pm EST. Winner will be notified via Twitter DM. A response via DM is required within 24 hours or entrant forfeits their ticket. Caktus employees are not eligible. Winning entrant must be 18 years of age or older. Ticket is non-transferable.

Bonne chance!

Astro Code SchoolPython & Django Web Engineering Class Syllabus

Our first Python & Django Web Engineering class is fast approaching. More information is forthcoming with a big update to the website. Until then here is a sneak peek at the syllabus. Join our email list to find out the latest info first.

Python & Django Web Engineering 2015

  1. Python Basics, Git & GitHub, Unit Testing

  2. Object Oriented Programming, Functional Programming, Development Process, Command Line

  3. HTML, HTTP, CSS, LESS, JavaScript, DOM

  4. Portfolio Development, Intro to Django, Routing, Views, Templates

  5. SQL, Models, Migrations, Forms, Debugging

  6. Django Admin, Integrating Apps, Upgrading Django, Advanced Django

  7. Ajax, JavaScript, REST

  8. Linux Admin, AWS, Django Deployment, Fabric

  9. Interviewing Skills, Computer Science Topics, Review

  10. Final Project Labs

  11. Final Project Labs

  12. Final Project Labs

Frank WierzbickiJython 2.7 beta4 released!

[Update: some of the download links were wrong; they should now be correct. Sorry for the mistake!] On behalf of the Jython development team, I'm pleased to announce that the fourth beta of Jython 2.7 is available. I'd like to thank Amobee for sponsoring my work on Jython. I'd also like to thank the many contributors to Jython.

Jython 2.7b4 brings us up to language level compatibility with the 2.7 version of CPython. We have focused largely on CPython compatibility, and so this release of Jython can run more pure Python apps than any previous release. Please see the NEWS file for detailed release notes. This release of Jython requires JDK 7 or above.

Jim Baker put together a great summary of the recent work for beta4. As a beta release we are concentrating on bug fixing and stabilization for a production release.

This release is being hosted at Maven Central. The traditional installer can be found here. See the installation instructions for using the installer. Two other versions are available. To see all of the files available, including checksums, go here and navigate to the appropriate distribution and version.

Caktus GroupTriangle Open Data Day and Code Across

International Open Data Day is this weekend, February 21st and 22nd. As part of the festivities, Code for America is hosting its 4th annual CodeAcross. The event aims to unite developers across the country for a day of civic coding, creating tools that make government services simple, effective, and easy to use. Put simply, “the goal of CodeAcross is to activate the Code for America network and inspire residents everywhere to get actively involved in their community.”

Technology Tank is hosting CodeAcross as part of Triangle Open Data Day, a chance for people living in the Triangle to come together to learn about open data and hacking for civic good. This year will involve a civic hackathon with something for everyone, from novice to expert coders alike.

Not only is Caktus an official bronze sponsor of this year’s Triangle Open Data Day, but CTO Colin Copeland is also the founder and co-captain of Code for Durham, the Durham chapter of Code for America.

Not sure whether you should attend? Don’t worry, Code for America has provided a helpful flowchart to help you decide. We’ll hope to see you there!

Josh JohnsonClojure + Boot Backend For The React.js Tutorial

Last weekend I worked my way through the introductory tutorial for React.js. It's very well written and easy to follow; I was really happy with it overall.

For the uninitiated, React.js is a framework that provides a means to create reusable JavaScript components that emit HTML in a very intuitive way. Taking that concept a step further, it's possible to use React on the backend, utilizing the same components to build the UI that is served to the user initially. The end result is very interesting: React prescribes an intuitive and scalable approach to building complex, dynamic user interfaces as highly reusable components.

These user interfaces avoid the redundancy of generating and manipulating HTML twice – once on the server, and again in the browser.

The server-side rendering feels like a natural pattern in a Node.js environment, but there are examples in the wild of doing server-side rendering with other platforms, most notably Clojure. This is exciting stuff.

React has been around for a while, but this is the first time I’ve taken a close look at it.

The tutorial focuses on building a simple front-end application rendered entirely in the browser. Initially, you work with a standalone HTML page, and near the end, you integrate it with a simple web application.

The source repository for the tutorial provides some example applications written in Python, Ruby and Node.js.

A simple application like this seemed like an ideal use case for a simple boot script, so I decided to write one of my own. Here's the code inline, but I've forked the repository if you'd like to examine the code alongside its cohorts.

#!/usr/bin/env boot
 
(set-env! 
  :dependencies 
  #(into % '[[org.clojure/data.json "0.2.5"]
             [ring/ring-core "1.3.2"]
             [ring/ring-jetty-adapter "1.3.2"]]))

(require '[ring.adapter.jetty     :as jetty]
         '[clojure.data.json      :as json]
         '[ring.middleware.params :refer [wrap-params]]
         '[ring.util.response     :refer [file-response response]])

(defn static
  "Handle static file delivery"
  [request]
  (let [uri (:uri request)
        path (str "./public" uri)]
    (if (= uri "/comments.json")
      (file-response "./_comments.json")
      (file-response path))))

(defn save-comments
  "Save the comments to the json file, and return the new data"
  [request]
  (let [data (json/read-str (slurp "./_comments.json"))
        input (:form-params request)
        out (concat data [input])
        new-json (json/write-str out)]
    (spit "./_comments.json" new-json)
    (response new-json)))

(defn handler
  "Simple handler that delegates based on the request type"
  [request]
  (case (:request-method request)
    :post (save-comments request)
    :get (static request)))

(def app
  "Add middleware to the main handler"
  (wrap-params handler))

(defn -main
  [& args]
  (jetty/run-jetty app {:port 3000}))

Essentially, it sets up two handlers, and then a dispatcher that delegates between them depending on the type of request. If the request is a GET, a static file is assumed. This serves the HTML and any local dependencies. If the request is specifically for comments.json, the handler serves the _comments.json file.

If the request is a POST, it's assumed that the body of the request contains a JSON-encoded comment to add. It deserializes that data and the _comments.json file, and appends the new comment to the list. The result is then saved to the filesystem.

Obviously, there is little in the way of error checking going on here. This tracks with the scope of the other example applications.

Note: It’s not clear to me exactly why they used _comments.json to store the data – in my initial prototype I named it comments.json and placed it with the other static files.

Interestingly, this boot script also serves as a minimalistic example of a web application using ring – including adding middleware.

This was a fun way to finish up a really informative tutorial – I’m excited to continue exploring what React.js can do, especially with Clojure!

Special thanks to alandipert and ul from #hoplon for code review and some great advice on cleaning up my initial implementation!


Caktus GroupAstro Code School Tapped to Teach App Development at UNC Journalism School

Our own Caleb Smith, Astro Code School lead instructor, is teaching this semester at UNC’s School of Journalism, one of the nation’s leading journalism schools. He’s sharing his enthusiasm for Django application development with undergraduate and graduate media students in a 500-level course, Advanced Interactive Development.

For additional details about the course and why UNC School of Journalism selected Caktus and Astro Code School, please see our press release.

Caktus GroupPyCon Blog Features Caktus Group

Brian Curtis, a director of the Python Software Foundation, recently interviewed and featured Caktus on the PyCon website. PyCon is the premier event for those of us within the Python and Django open source communities. Brian writes about our work designing the PyCon 2015 website, our efforts in Libya, and what's on the horizon in 2015. We're excited about this recognition!

Caktus GroupDjango Logging Configuration: How the Default Settings Interfere with Yours

My colleague Vinod recently found the answer on Stack Overflow to something that's been bugging me for a long time - why do my Django logging configurations so often not do what I think they should?

Short answer

If you want your logging configuration to behave sensibly, set LOGGING_CONFIG to None in your Django settings, and do the logging configuration from scratch using the Python APIs:

LOGGING_CONFIG = None
LOGGING = {...}  # whatever you want

import logging.config
logging.config.dictConfig(LOGGING)

Explanation

The kernel of the explanation is in this Stack Overflow answer by jcotton; kudos to jcotton for the answer: before processing your settings, Django establishes a default configuration for Python's logging system, but you can't override it the way you would think, because disable_existing_loggers doesn't work quite the way the Django documentation implies.

The Django documentation for disable_existing_loggers in 1.6, 1.7, and dev (as of January 8, 2015) says "If the disable_existing_loggers key in the LOGGING dictConfig is set to True (which is the default) the default configuration is completely overridden." (emphasis added)

That made me think that I could set disable_existing_loggers to True (or leave it out) and Django's previously established default configuration would have no effect.

Unfortunately, that's not what happens. The disable_existing_loggers flag only does literally what it says: it disables the existing loggers, which is different from deleting them. The result is that they stay in place, they don't log any messages, but they also don't propagate any messages to any other loggers that might otherwise have logged them, regardless of whether they're configured to do so.

What if you try the other option, and set disable_existing_loggers to False? Then your configuration is merged with the previous one (the default configuration that Django has already set up), without disabling the existing loggers. If you use Django's LOGGING setting with the default LOGGING_CONFIG, there is no setting that will simply replace Django's default configuration.

Because Django installs several django loggers, the result is that unless you happened to have specified your own configuration for each of them (replacing Django's default loggers), you have some hidden loggers possibly blocking what you expect to happen.

For example, when I wasn't sure what was going on in a Django project, I'd sometimes try just adding a root logger, to the console or to a file, so I could see everything. I didn't know that the default Django loggers were blocking most log messages from Django itself from ever reaching the root logger, and I would get very frustrated trying to see what was wrong with my logging configuration. In fact, my own logging configuration was probably fine; it was just being blocked by a hidden, overriding configuration I didn't know about.

We could work around the problem by carefully providing our own configuration for each logger included in the Django default logging configuration, but that's subject to breaking if the Django default configuration changes.

The most fool-proof solution is to disable Django's own log configuration mechanism by setting LOGGING_CONFIG to None, then setting the log configuration explicitly ourselves using the Python logging APIs. There's an example above.

The nitty-gritty

The Python documentation is more accurate: "disable_existing_loggers – If specified as False, loggers which exist when this call is made are left enabled. The default is True because this enables old behavior in a backward-compatible way. This behavior is to disable any existing loggers unless they or their ancestors are explicitly named in the logging configuration."

In other words, disable_existing_loggers does literally what it says: it leaves existing loggers in place, it just changes them to disabled.

Unfortunately, Python doesn't seem to document exactly what it means for a logger to be disabled, or even how to do it. The code seems to set a disabled attribute on the logger object. The effect is to stop the logger from calling any of its handlers on a log event. An additional effect of not calling any handlers is to also block propagation of the event to any parent loggers.
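A small stdlib-only experiment (the "myapp.*" logger names are hypothetical) makes the behavior concrete: a logger that exists before dictConfig() runs ends up with its disabled attribute set, while one created afterwards does not:

```python
import logging
import logging.config

# A logger created *before* dictConfig() runs, and not named in the
# configuration below, is disabled -- not deleted -- when
# disable_existing_loggers is True (the default).
early = logging.getLogger("myapp.early")

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": True,  # the default
    "handlers": {"null": {"class": "logging.NullHandler"}},
    "root": {"handlers": ["null"], "level": "DEBUG"},
})

# A logger created *after* dictConfig() is unaffected.
late = logging.getLogger("myapp.late")

print(early.disabled)  # True  -- still present, silently dropping records
print(late.disabled)   # False -- it didn't exist yet, so it was spared
```

The disabled `early` logger will neither call its handlers nor propagate to the root logger, which is exactly the silent blocking described above.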

Status of the problem

There's been some recent discussion on the developers' list about at least improving the documentation, with a core developer offering to review anything submitted. And that's where things stand.

Footnotes