tag:blogger.com,1999:blog-80860529831842174402024-03-13T08:31:34.785-04:00Paul Stadiga tinkerer in the art and science of computing machinespaulhttp://www.blogger.com/profile/14647609048389725132noreply@blogger.comBlogger48125tag:blogger.com,1999:blog-8086052983184217440.post-15898272517952695542021-05-31T11:32:00.004-04:002021-05-31T11:33:34.266-04:00Thoughts on the Death of Brooks' Law<div>Brooks' Law is wrong.<br /></div><div><br /></div><div>Brooks' Law (the pithy version) is: "adding people to a late project makes it later." There is some truth to that, but it's more nuanced. A bad manager staffs up a late project (in an act of desperation?) to make a deadline. However, Bertrand Meyer (and Steve McConnell and Barry Boehm) believe—based on empirical evidence—that judiciously adding people to a project can shorten its schedule.<br /></div><div></div>You <i>can</i> shorten a schedule by adding more people...however, there's a limit. You can only shorten the schedule for a software project by (up to) 25%. Since adding people to a software project means adding cost, what this is really saying is you can spend more to get your thing up to 25% more quickly. However, the reverse is not necessarily true. Sometimes you can take people off a project, give the remaining people more time, and get your thing for less money; sometimes you cannot.<br /><div></div><div> This has been known for about 20 years and yet Brooks' Law has survived as a
folk wisdom, and Bertrand Meyer wants to get the word out. That's why he
wrote <a href="https://cacm.acm.org/magazines/2020/1/241718-in-search-of-the-shortest-possible-schedule/fulltext" rel="nofollow" target="_blank">"In Search of the Shortest Possible Schedule."</a></div><div></div><div>I grew up on Brooks' Law, so I'm trying to absorb this. Brooks' Law seems right to me, and in <a href="https://stevemcconnell.com/articles/brooks-law-repealed/">"Brooks' Law Repealed?"</a> Steve McConnell describes an experience that seems familiar:</div><div><blockquote>To those of us who have been around the software-project block a few
times, the claim feels true. We’ve participated on projects in which new
people are brought on at the end. We know the irritation of having to
answer questions from new staff when we’re already feeling overwhelmed
about our own work. We’ve seen new hires make mistakes that set the
whole project back. And we’ve experienced additional schedule slips even
after staff has been added to a late project.</blockquote></div><div>McConnell squares this experience with the new sans Brooks' Law world by pointing out that Brooks' Law does apply in certain circumstances, but that projects are poorly estimated and poorly tracked. The result is not knowing whether you are in the Brooks' Zone or whether there is enough project left for new hires to pay off the productivity lost to training them.<br /></div><div></div><div>I'm not sure I buy that. I think the idea of estimation is fundamentally flawed. I can't help but feel like this is saying, "We're doing a bad job. Do better!" I'm more and more convinced that breaking work down, estimating the pieces, and rolling it back up is a terrible way to estimate. It fails to account for variability, and padding estimates is not the solution.</div><div></div><div>And sure, better project tracking seems like a good thing. It is a necessary first step, but having the data isn't enough; you also have to interpret and extrapolate from it. Probably the best thing that can be done with tracking data is to let it <a href="http://paul.stadig.name/2017/02/continuous-planning.html">empirically drive estimations</a>.<br /></div><div></div><div>I cannot deny the empirical evidence. You can pay more to shorten a project by up to 25%, but there are some intriguing questions that pop up: Why? Why 25%? Why doesn't it always work backwards? Could you repeat the process with a revised schedule and cut another 25%? 
How would knowing about this bias estimations?</div><div></div><div>I think what I take away from this is that Brooks' Law has narrower application than I initially expected and adding people to a project <i>can</i> bring it to completion more quickly.<br /></div>Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-61291321620543600142019-11-22T08:12:00.000-05:002019-11-22T11:17:28.273-05:00On Writing (Code)Introspection into my own process for making software has led me to believe that <a href="https://twitter.com/pjstadig/status/1159083645405192193">writing prose</a> and programming are <a href="https://twitter.com/einarwh/status/1197468002670788615">fundamentally the same</a>. This was further highlighted for me when I read Stephen King's <i>On Writing</i>. I saw interesting parallels between the way he likes to work and the way I like to work. Also my code is a horror story.<br />
<br />
King likes to write a first draft as quickly as possible. He does this to get the story out. He doesn't worry about character development or even holes in the plot. Those he will fix up in the second draft. The first draft is about getting onto paper the good bones of a story. Then he lets the first draft sit for a couple of weeks.<br />
<br />
When he can come back to the first draft and it looks familiar but not quite—like it was written by his doppelganger—then he has enough distance to make the second draft. That's where any story issues are fixed and things are smoothed out and tightened up. His goal is to make the story 10% shorter.<br />
<br />
When I write code:<br />
<br />
<ul><li>I like to write code breadth-first so I can see all the moving parts and confirm that everything fits together.</li>
<li>I like to let it sit for a couple of minutes, or hours, or days—whatever I can afford.</li>
<li>I like to smooth things out and tighten things up in the second draft—things like function names, refactoring, etc.</li>
</ul><br />
Maybe writing code isn't exactly like writing prose. Maybe it's more like knitting or woodworking or cycling or <a href="https://www.youtube.com/watch?v=OUZZKtypink">solving crimes</a>. I think part of the reason that programmers tend to see their work in everything is that programming is really just a process for solving problems and <a href="http://paul.stadig.name/2016/03/making-fake-things.html">giving form to thoughts</a>. Those skills can be applied to many different domains. Maybe in that way it actually is more like writing than other hobbies?<br />
<br />
At least I didn't say that programming is like gardening. I actually appreciate <a href="http://paul.stadig.name/2018/05/gardening.html">gardening</a> as a hobby precisely because it is entirely different from making software!<br />
Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-42559952588075565642019-11-12T18:15:00.000-05:002019-11-12T18:15:26.278-05:00How to Survive Life UndamagedHow does one survive life undamaged? Seriously... if you figure it out let me know. I particularly mean self-inflicted damage. You can't guarantee anything about how other people act or think.<br />
<br />
Here is what I have tried (am trying):<br />
<br />
<b>1. Be willing to listen to others.</b><br />
<br />
You don't have to listen to everyone, but be aware that you risk losing valuable perspective and insight, even if it is insight about how not to do things. You should probably be willing to listen to some people that you have shut out. If you are not uncomfortable, then you are not growing.<br />
<br />
Everyone has a story. You can either imagine a person's story, categorize and label them, and treat them as if that story is true, or you can actually listen. You can listen to what someone believes while thinking about why they're wrong and how you will show them, or you can try to inhabit their space and see the world the way they see it. You don't have to stay there, but you will take lessons from it.<br />
<br />
<b>2. Be skeptical by default.</b><br />
<br />
To listen to others and inhabit their space does not mean uncritically accepting their ideas. You can be empathetic, you can entertain an idea, without being swept away. You can also question if a person is wrong without imagining they are your enemy.<br />
<br />
If some idea pushes through your skepticism, accept it. It is OK to move a little closer towards an "opposite" view.<br />
<br />
<b>3. Don't try to convince everyone.</b><br />
<br />
This can be particularly painful when it is someone you respect and/or love. It may take time. Or it may never be. Life is sour sometimes. Don't make it more sour than it needs to be.<br />
<br />
If you are skeptical by default, then maybe others are as well? If you are listening to others and inhabiting their space, then you must know that it will not be easy to convince everyone. You can let that bother you. You can turn it into an obsession, or you can focus on the good (what little you estimate there to be) and seek friendship.<br />
<br />
These are three things you can try. If you refuse to listen to others and make it your life goal to convince everyone how they're wrong, you will have a life filled with stress and bitterness. Or you could listen---critically---and seek friendship over rightness.Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-91646203968810598852019-11-06T20:24:00.001-05:002019-11-06T20:24:02.773-05:00Beyond TechniquesI firmly believe that writing software is a creative act. I believe that a construction project---or any other linearly presenting process---is the wrong analogy for software development. A much better one is writing prose, which is a process of writing and rewriting and rewriting, and sometimes throwing it all out and starting over. In the same way a software engineer names and renames, factors and refactors, etc., then extends the functionality by doing it all again. This all has to fit into the larger code base in a way that makes for a consistent whole. This requires judgement and taste.<br />
<br />
To come at the argument from the other side, if you spent your days rewriting the same code over and over, then you would split it into more generic functions and perhaps a library. If you're solving the same problem that someone else has solved, then you would just reuse their software. This does in fact happen, and yet software engineers still get paid to work, and the value they generate is in the new stuff that requires creativity. Sure, there are some inefficiencies that mean people rewrite software that they could have just reused, and you could argue that the new things to be done are increasingly marginal, but on the whole software engineers would be out of work if there were not new things to create. Ergo writing software is a creative act.<br />
<br />
The next reasonable question is: <a href="http://paul.stadig.name/2018/09/manufacturing-creativity.html">how does one create?</a> I believe that fundamentally, and on average, one cannot directly induce creativity. This isn't entirely satisfying. I wish creativity were a predictable, mechanical process. Is it possible to discover and categorize tools for thinking? I'm still exploring that. Here's an example: to get a fresh perspective on your problem, think not about how to solve it, but how to avoid failing to solve it. Instead of trying to figure out how to build a stronger bridge, maybe you'll gain insight by considering how to avoid building a weaker bridge. This technique for finding fresh perspective...does it not make creating into a predictable, mechanical process?<br />
<br />
I actually like techniques. I believe the learning process is (or is best) technique-based. Think about learning to cook. You start by following the recipe. Then you start to improvise. You understand that this ingredient is for flavor and can be adjusted to taste, but that ingredient provides aeration and messing with it too much will produce a dense, inedible result. Eventually you move beyond techniques and start to construct your own recipes from first principles. A poor education either starts with first principles, which a beginner does not have the experience to appreciate, or focuses too much on techniques divorced from context, which frustrates learners because they don't know how to properly <i>apply</i> techniques. I intended my book <i><a href="http://leanpub.com/clojurepolymorphism/c/CYleb0NF9qJX">Clojure Polymorphism</a></i> to explore varied applications of the same tools and techniques, in a way that I hope will guide readers from knowledge into wisdom.<br />
<br />
The writing curriculum we use in homeschooling our children is technique-based. A child reads an essay and creates an outline of the important points (there's a whole set of techniques for that). Then he will rewrite the essay from the outline applying techniques like "ly"-words, openers, and clinchers. This removes the hand-wringing about <i>what</i> to write about. (Incidentally, if you read Ben Franklin's autobiography, this is the way he improved his own writing.) What if computer programming were taught this way? Here's a program that consumes a CSV file, changes some things, then writes it back out. First, write out a high-level algorithm for the program, then rewrite it using 3 of the 15 techniques you've learned (concurrency, multimethods, asynchronous channels, etc.). Repeat. I think this would produce wiser programmers.<br />
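The baseline for that exercise might look something like the following sketch (Python here for brevity rather than Clojure; the two-column file of `name` and `qty` is invented purely for illustration—the point is only to have a concrete starting program to rewrite with different techniques):

```python
import csv
import io

def uppercase_names(reader, writer):
    """Baseline draft: read rows, upper-case the 'name' column, write them back out."""
    rows = csv.DictReader(reader)
    out = csv.DictWriter(writer, fieldnames=rows.fieldnames)
    out.writeheader()
    for row in rows:
        row["name"] = row["name"].upper()
        out.writerow(row)

# Stand-ins for real files, so the sketch is self-contained.
src = io.StringIO("name,qty\nwidget,2\ngadget,5\n")
dst = io.StringIO()
uppercase_names(src, dst)
print(dst.getvalue())
```

A student would then rewrite this same program several times—concurrent workers, a dispatch table per column, streaming channels—while the observable behavior stays fixed.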
<br />
However, eventually you need to move beyond techniques. Leaning on techniques is just shifting the problem elsewhere. Coining techniques is an attempt at mechanizing creativity, but instead of making creating mechanical, you are now faced with choosing which technique to apply to generate a creative solution. Now the creative leap must occur earlier in the process. You can try each technique in a brute-force search, but I doubt many (if any) would consider that "creative," nor is it particularly efficient. The experience of repeatedly applying techniques should help you develop judgement and taste, so when you're faced with inventing from first principles, you have some gut sense or guiding aesthetic. That guiding aesthetic is not some set of rules for when to apply which techniques. If it were, then it would become a "library," so to speak. Anybody---or any machine---could evaluate the rules and perform the steps.<br />
<br />
Techniques are great for broadening your learning, but they are not self-applying. Learning all the techniques does not help you invent new techniques, nor will it induce creativity. You can know how to cut impeccable dovetails and square up lumber and drill straight holes, but that does not mean you will create beautiful furniture.<br />
<br />
How does one develop this guiding aesthetic? You read a lot of other people's code. You write a lot of code. You read a lot of books. You work hard. Sorry, that's the best I've got. This is a continuing journey...to be continued.<br />
<br />
However you do it, the end goal should be to move beyond techniques.Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-58702726422393466792019-10-30T17:44:00.001-04:002019-10-30T17:44:07.374-04:00Virtual Machine Oriented DevelopmentMost computing devices that we have today---desktop, laptop, or phone---are capable of computing any program that can be computed. There's a bit of equivocation there. What is meant is that anything that a human can manually calculate via rote, mechanical process can also be done by a computer. This is the Church-Turing Thesis.<br />
<br />
I've never really stopped to think, but what would a non-universal computing machine look like?<br />
<br />
.... <br />
<h4>Longevity</h4>Several years back I suffered a bout of jealousy. I thought about engineers in other fields who build roads or buildings or even cars. A civil engineer can imagine something they've built standing still amidst the blur of 100 years passing by. An automotive engineer can imagine a car they've designed still driving the roads in 20 years.<br />
<br />
<i>I don't think a single line of code that I wrote 3 or more years ago is still in production</i>, and that's ignoring all the code that I wrote that never made it into production. <br />
<br />
These are the kinds of things you start to ponder as you reach the ripe, old, programmer retirement age of 33.<br />
<br />
But then a funny thing happened. I played <a href="https://en.wikipedia.org/wiki/Sam_%26_Max_Hit_the_Road" target="_blank">Sam & Max Hit the Road</a> ... on my Android <a href="https://twitter.com/pjstadig/status/510607116730920960" target="_blank">phone</a>.<br />
<br />
Here was a game released in 1993 and I was playing it on my Android device in 2014. How did that happen? Well, when LucasArts designed the game "Maniac Mansion," they decided to create a scripting language (SCUMM, the Script Creation Utility for Maniac Mansion) and write the game in that language, and they used that scripting language for many of the games they made. I have several original LucasArts games on CD. Some are PC versions, some are Mac versions.<br />
<br />
Over the years as I feel the nostalgia hitting me I'll grab the game files from the CD and download ScummVM for whatever platform I'm on at the time. I copied the game files to my phone and downloaded ScummVM from the Play store. That's how it happened.<br />
<br />
....<br />
<h4>Data is Code</h4>I had been exposed to Lisp, and even written a lot of Lisp before I finally had my enlightenment about macros and metacircular interpreters. I remember vividly reading <a href="https://mitpress.mit.edu/sicp/full-text/book/book.html" target="_blank">Structure and Interpretation of Computer Programs</a> and seeing Scheme put to use creating simple yet powerful abstract interpreters. The authors start "simply" with interpreters that add new programming paradigms to Scheme. Then they proceed to simulating a register machine and writing an assembler and compiler for it. This happens in the last chapter, a space of ~100 pages.<br />
<br />
It is a Divine joke that Structure and Interpretation of Computer Programs and The Art of Computer Programming had their titles swapped, because---while I don't wish to denigrate TAOCP, which is an amazing depth of riches---SICP is about art, and in a metacircular way it is art.<br />
<br />
It is too easy as a Lisper to understand the world this way, but data is always code. In Forth, 5 is not a number; it is an instruction to push the value 5 onto the top of the program stack. Your program receives a program as input. It receives files, network packets, key presses, and mouse clicks. It interprets this program and produces output.<br />
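The Forth point can be made concrete with a toy stack interpreter (a minimal sketch, not real Forth; it supports only integer literals, `+`, and `*`):

```python
def run(program):
    """Interpret a list of words, Forth-style: every datum IS an instruction."""
    stack = []
    for word in program:
        if word == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif word == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            # "5" is not a number; it is the instruction "push 5".
            stack.append(int(word))
    return stack

print(run("3 4 + 5 *".split()))  # prints [35]
```

The input string here is simultaneously data to the Python program and a program to the interpreter, which is exactly the point.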
<br />
A PDF file can cause a buffer overrun in a PDF reader because each byte is literally an instruction to your program-as-interpreter to "write a value at the current location and move to the next location" (or at least it can be if your program-as-interpreter has flawed semantics).<br />
<br />
This is not a property of Lisp, it is a property of the stored program computer, Universal Turing Machine, von Neumann architecture. Code and data are made of the same stuff and stored in the same memory.<br />
<br />
....<br />
<h4>The Non-Divine Joke</h4>In his talk <a href="https://www.destroyallsoftware.com/talks/the-birth-and-death-of-javascript" target="_blank">"The Birth & Death of JavaScript,"</a> the 2014 version of Gary Bernhardt extrapolates where JavaScript and asm.js will take the world in 2035 (after an apocalyptic global war, of course). The punch line is that instead of JavaScript running on computers, computers run on JavaScript. This happens through a comical stack of emulators emulating emulators that emulate. Actually I think it's compilers transpiling compilers that transpile transpilers, but...same difference.<br />
<br />
But like every joke there's a bit of truth to it.<br />
<br />
Paul Graham writes about <a href="http://www.paulgraham.com/progbot.html" target="_blank">"Programming Bottom-Up"</a> where you build the language "up" to your program to the point that actually expressing your program becomes somewhat trivial. You're building a domain specific language to solve exactly the problem you have. Again, this is all too natural for Lispers, but everyone does it.<br />
<br />
The act of programming is to turn a universal computing machine into a limited computing machine. You build out data types and operations to focus the abilities of the computer into a specific domain. Programmers instinctively understand this, which is why we find it so funny that---in a twist of irony---a universal computing machine emulates a universal computing machine emulating a universal computing machine.<br />
<br />
....<br />
<h4>Virtual Machine Oriented Development</h4>I started thinking about Virtual Machine Oriented Development because I was concerned about the transience of my legacy. I noticed that there were software products that were still around 20 years after they were written. I started seeing a VM underneath them.<br />
<br />
But having thought about it more, I don't think that Virtual Machine Oriented Development is just about legacy. I think it might clarify the design process to be explicit about the fact that we're designing a limited computing machine that analyzes sales data. What are the data types? What are the operations? If you have power users, maybe they'd even like a scripting language that can describe which data to import and then how to analyze it?<br />
<br />
You might find then that you've abstracted your problem into a computation model that will become valuable for years. Maybe you'll end up rewriting the interpreter for this language several times, and all the while users can keep using their existing scripts.<br />
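As a sketch of what being explicit about the "limited machine" might look like, here is a toy interpreter for a hypothetical sales-analysis scripting language (the commands `load`, `filter`, and `total`, the script syntax, and the data shape are all invented for illustration, not taken from any real product):

```python
def interpret(script, tables):
    """Run a line-oriented script against named tables of (region, amount) rows."""
    data = []
    for line in script.strip().splitlines():
        cmd, *args = line.split()
        if cmd == "load":          # load <table>: start from a named table
            data = list(tables[args[0]])
        elif cmd == "filter":      # filter <region>: keep matching rows
            data = [row for row in data if row[0] == args[0]]
        elif cmd == "total":       # total: reduce to the sum of amounts
            data = sum(row[1] for row in data)
    return data

tables = {"sales": [("east", 100), ("west", 250), ("east", 40)]}
script = """
load sales
filter east
total
"""
print(interpret(script, tables))  # prints 140
```

The interpreter could be rewritten several times—in any language, on any platform—while users' existing scripts keep working, which is the longevity argument in miniature.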
<br />
.... <br />
<h4>Conclusion</h4>What does a non-universal computing machine look like? It looks like every program you've ever written.Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-29416502910374797852019-08-07T13:41:00.000-04:002019-08-07T13:41:44.707-04:00Speed ReadingMy reading habits are lumpy. I find I'm either not reading anything, or I'm reading seven books at once, and when I am, I wish I could speed read. I have in the past read books on speed reading, and they usually boil down to techniques like increasing your eye span, eliminating regression, eliminating subvocalization, etc. The theory seems to be that your brain can work much faster than your eyes, and you just need to eliminate bad habits, and establish some better ones, so you can get the words into your brain faster.<br />
<br />
I do feel like there's something to this. In my experience, my mind tends to wander as I'm reading, and sometimes it's easier to skim since that keeps my mind busier trying to assemble random bits and pieces into a comprehensive whole. That seems to be the idea with this new speed reading book that I picked up called "Speed Reading With The Right Brain." The author claims that by engaging your brain in conceptualizing what you're reading as you're reading, you increase comprehension, and it is through increased speed of comprehension that you achieve increased reading speed. I want to believe, but I'm still skeptical.<br />
<br />
What I would love is some reference that approaches speed reading empirically, you know, with science. What do we know that actually works based on research? Well...you may not like the answer.<br />
<h4>Speed Reading is Fake</h4>In looking for an empirically backed approach to speed reading, I came across <a href="https://journals.sagepub.com/doi/10.1177/1529100615623267">"So Much to Read, So Little Time: How Do We Read, and Can Speed Reading Help?"</a> This article is based on decades of reading research and cognitive psychology. One of the authors—who passed away from cancer a few days after the first draft—proposed the article "because he felt that it was important to share the knowledge we have gained from experimental science with the general public."<br />
<br />
While there may be savants who can read impossibly fast without sacrificing comprehension, controlled studies show that for normal people learning to read faster means comprehending less. When you learn to "speed read" you are actually learning to skim. "Taylor notes that [Evelyn] Wood 'repeatedly stated that her people are not skimming, but rather are reading' (Taylor, 1962, p. 65). Based on recordings of their eye movements, however, Taylor concluded that they closely resembled the eye movement patterns produced during skimming (Taylor, 1965; see also Walton, 1957)." Also from the article:<br />
<blockquote>The speed readers did better than skimmers on general comprehension questions about the gist of the passages but not quite as well as people reading at normal speed. … The advantage of trained speed readers over skimmers with respect to general comprehension of the text was ascribed by Just and colleagues to an improvement in what they called <i>extended inferencing</i>. Essentially, the speed readers had increased their ability to construct reasonably accurate inferences about text content on the basis of partial information and their preexisting knowledge. In fact, when the three groups of participants were given more technical texts (taken from <i>Scientific American</i>), for which background knowledge would be very sparse, the speed readers no longer showed an advantage over the skimmers, even on general questions.</blockquote>According to the article, learning to speed read may improve your ability to skim, but only for familiar subjects. This isn't necessarily bad news for two reasons:<br />
<ol><li>You <i>can</i> improve your reading speed, just not as dramatically as speed reading advocates suggest.</li>
<li>Learning to skim effectively is a useful skill to learn.</li>
</ol><h4>Improve Your Reading Speed</h4>The average person reads between 200 and 400 words per minute. What is the best way to improve your reading speed? Practice. Unsurprising. Perhaps a little disappointing? There are a couple of ways that practice increases your reading speed, but they basically break down to improving your language skills:<br />
<ul><li>better vocabulary</li>
<li>exposure to more writing styles</li>
</ul>The broader your vocabulary, the more familiar you are with words and styles, the more quickly you can read. Your eyes fixate less on words with which you are familiar, so they move more briskly across the page. Your familiarity with style allows you to anticipate better how a sentence will end when you've only read part of it. Also, "written language uses some vocabulary and syntactic structures that are not commonly found in speech, and practice with reading can give people practice with these."<br />
<br />
The more you <i>do</i> reading, the faster you'll get.[1]<br />
<h4>Learn to Skim Effectively</h4>Effective skimming is mostly about trying to extract structure and important ideas from a text. Scan for:<br />
<ul><li>headings</li>
<li>paragraph structure</li>
<li>key words</li>
</ul>According to the article, "Research has shown that readers who pay more attention to headings write the most accurate text summaries (Hyönä, Lorch, & Kaakinen, 2002)."[2] You can also do things like:<br />
<ul><li>scan the table of contents</li>
<li>read the first paragraph of each section</li>
<li>read the first sentence of each paragraph</li>
</ul>Again from the article:<br />
<blockquote>The eye movements revealed that skimmers tended to spend more time reading earlier paragraphs and earlier pages, suggesting that they used the initial parts of the text to obtain the general topic of passages and provide context for the later parts that they skimmed in a more cursory way. Therefore, effective skimming means making sensible decisions about which parts of a text to select for more careful reading when faced with time pressure. In fact, Wilkinson, Reader, and Payne (2012) found that, when forced to skim, readers tended to select texts that were less demanding, presumably because they would be able to derive more information from such texts when skimming. This kind of information foraging is a useful method of handling large amounts of text in a timely manner.<br />
</blockquote>You know what else helps you skim effectively? Practice. Practice gives you a broader base of knowledge and experience to draw on:<br />
<blockquote>That [knowledge/experience] may be the basis for some anecdotes about the speed-reading abilities of famous people, such as that President Kennedy could pick up a copy of the <i>Washington Post</i> or the <i>New York Times</i> and read it from front to back in a few minutes. However, consider the knowledge and information that someone like Kennedy would bring to the task of reading the newspaper. As president, he was briefed about important world events each day and was involved in generating much of the policy and events reported in the newspaper; thus, he probably had first-hand knowledge of much of what was described. In contrast, the average person would come to such a situation with very few facts at his or her disposal and would probably have to read an article rather carefully in order to completely understand it. To read rapidly, you need to know enough about a topic to fit the new information immediately into what you already know and to make inferences.<br />
</blockquote>Of course, the downside of skimming is that you are skipping over portions of text, resulting in lower comprehension. However, if you're looking to get a general overview or find one specific fact, then it can be useful. It may also be good for a first pass at a text before reading in depth.<br />
<h4>Conclusion</h4>I do feel as though my mind wanders when I read, and I wonder whether there is a way to better engage my mind when reading. Perhaps conceptualizing or visualizing or some other way of focusing more would help comprehension and speed. Unless and until I figure that out, I can improve my reading speed by practicing and getting better at skimming, when skimming makes sense.<br />
<br />
<b>Footnotes:</b><br />
[1] I wonder (off-the-cuff, anecdotally, non-scientifically) whether writing more also improves reading, for the same reason: it improves language skills.<br />
<br />
[2] Interestingly, this means that as an author you bear some of the burden for helping readers quickly consume your writing. I had started to shy away from listicle-style blog posts, thinking I'd try to contribute to a more high-minded discourse that rewarded effort in reading and comprehending. This article has more headings and lists...maybe I'll do a little of both. :)<br />
<br />
Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-33024268071058535712018-09-12T13:27:00.001-04:002018-09-12T13:27:47.544-04:00Engineering SynthesisWhat is the nature of software engineering? How is it different from other kinds of engineering? Why is it so hard?<br />
<br />
These are questions I have struggled with for many years. In my work, I have seen more than a few different takes on software engineering. Even when things start out right they seem to end at a sad place, and this has bothered me. Is it really impossible to do software "right?" Or do we just have the wrong idea about how to do it? Software engineering is a relatively new discipline, so maybe we still have some things to learn.<br />
<br />
I'm going to draw from several sources here, and try to synthesize some ideas about engineering, science, and art. I feel kind of silly writing all these words summarizing other sources when you could just go watch the videos and read the papers yourself. But for my purposes these sources are a framework for discussing and organizing my thoughts.<br />
<h4>Real Software Engineering</h4>"Real Software Engineering" by Glenn Vanderburg<br />
<a href="http://www.infoq.com/presentations/Software-Engineering">http://www.infoq.com/presentations/Software-Engineering</a><br />
<br />
Glenn Vanderburg is a software practitioner, and he is reacting to the claim that software engineering needs to grow up and become a "real" engineering discipline. But what is "real" engineering?<br />
<br />
There are actually a couple of different versions of this talk available online, and in one Vanderburg takes some time to talk about "how did we get here?" He digs up some history on the NATO conference in 1968, whose goal was to define software engineering. He then talks about some commonly believed myths about engineering and how different engineering disciplines use different methods, then brings it back around to software engineering and applies what we've learned.<br />
<br />
There were three big ideas from Vanderburg's talk that stood out to me:<br />
<br />
<ol><li>The model of scientists discovering knowledge and engineers then applying that knowledge is wrong.</li>
<li>Software engineering is unique because we spend a lot of time crafting design documents and models and a trivial amount of time actually producing the end product, which is the exact opposite of most other branches of engineering.</li>
<li>Agile methods are the best methods we have and for all practical purposes they <i>are</i> software engineering.</li>
</ol><br />
When I first watched Vanderburg's talk years ago, the big idea was the second—about the uniqueness of software engineering—but coming back to it later I was surprised to find the first idea echoed in other sources. Vanderburg gives examples of advances in knowledge that came not from academics or scientists, but instead from practitioners and engineers. One example is <a href="https://en.wikipedia.org/wiki/Robert_Maillart" target="_blank">Robert Maillart</a>, an engineer who revolutionized the use of reinforced concrete in bridge building, and who did so before there were mathematical models to explain the uses and limits of reinforced concrete. Scientific advances are just as likely to come from practitioners as from academics.<br />
<br />
My second big idea from Vanderburg is that among the kinds of engineering, software engineering has some unique characteristics. If one were to build a skyscraper, one would produce designs, models, and blueprints, which would then be handed over to a construction team who would construct the building. The blueprints are relatively cheap to produce. The actual construction is error-prone and requires a lot of materials and labor. Looking at this process, it would seem very important to focus as much effort as possible on the architecting of blueprints. Once you've laid the foundation, it is expensive to rethink the footprint of the building.<br />
<br />
If I were to apply this process to software engineering, I might do something like the following: hire a system architect to create a design document, and then get a bunch of code monkeys to actually construct the system by writing code. In this interpretation, the requirements and design document are the model and blueprints, the system architect is the architect, and the code monkeys are the construction crew. Vanderburg picked up an insight from Jack Reeves in the '90s: this interpretation is wrong.<br />
<br />
Customers do not pay for code; they pay for an executable. They want a working system. That is the constructed product, and it is the compiler, not the code monkeys, that produces it. The code is the design document and mathematical model. The code monkeys are not the construction crew; they are the architects. Source code and its type systems are a mathematical model that can be formally verified. Using a compiler, I can produce a prototype from that model instantaneously and for free. The source code also contains documentation, and to the extent that it has automated tests (also written in the same language) it is self-verifying. Modern high-level languages and domain-specific languages can even be mostly understood by domain experts.<br />
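To make Reeves's insight concrete, here is a minimal sketch (the names <code>parse_price</code> and <code>test_parse_price</code> are hypothetical, not from any of the cited sources). The one artifact below is simultaneously a machine-checkable model (the type annotations), documentation (the docstring), and a self-verifying design (the test); the "constructed product" is whatever the interpreter or compiler produces from it.<br />
<br />
```python
# A sketch of "source code as design document." Hypothetical names.

def parse_price(text: str) -> int:
    """Parse a price like '$1,234' into cents.

    The annotations are the formal model a checker can verify;
    this docstring is the embedded documentation.
    """
    digits = text.replace("$", "").replace(",", "")
    return int(digits) * 100

def test_parse_price() -> None:
    # The design verifies itself: run the test, not a review meeting.
    assert parse_price("$1,234") == 123400
    assert parse_price("$0") == 0

test_parse_price()
```
<br />
In this view, "construction" (compiling and running) is nearly free, which is exactly why design effort can stay in the code itself.<br />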
<br />
Software engineering is a unique engineering discipline because source code is a unique artifact. We should be careful not to borrow engineering methods from a discipline where constructing a prototype is time-consuming and expensive, and where one is necessarily forced to spend more time on up-front design to avoid that cost. This leads nicely into my third big idea: that agile methods are, for all practical purposes, the best kind of software engineering we know.<br />
<br />
When I say agile methods, I mean agile with a little 'a'. I'm thinking (vaguely) of an incremental, tinkering approach versus a straight-line, mechanical approach; of a technician approach versus a technique approach. Or, as the Agile Manifesto put it, "individuals and interactions over processes and tools." I think they got that right. What is interesting is that they were not the only ones to get it right. The original NATO conference on software engineering (1968!) had it right before they had it wrong.<br />
<br />
There were two NATO conferences, a year apart. At the first, Alan Perlis <a href="http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PDF" target="_blank">summarized</a> the discussion on system design:<br />
<br />
<ol><li>A software system can best be designed if the testing is interlaced with the designing instead of being used after the design.</li>
<li>A simulation which matches the requirements contains the control which organizes the design of the system.</li>
<li>Through successive repetitions of this process of interlaced testing and design the model ultimately becomes the software system itself. I think that it is the key of the approach that has been suggested, that there is no such question as testing things after the fact with simulation models, but that in effect the testing and the replacement of simulations with modules that are deeper and more detailed goes on with the simulation model controlling, as it were, the place and order in which these things are done.</li>
</ol><br />
What he is saying is:<br />
<br />
<ol><li>Test early, test often.</li>
<li>Take a breadth first approach mocking out what you need so you can get a sense for the overall system.</li>
<li>Iteratively refine the system and replace the mocks.</li>
</ol><br />
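As an illustration (mine, not from the conference proceedings), Perlis's interlaced testing and design might look like this today: start with a stub that lets the whole system run end to end, then swap in a deeper implementation while the same test keeps the design honest. All names here are hypothetical.<br />
<br />
```python
# Hypothetical sketch of "interlaced testing and design":
# mock what you need, test the whole, then deepen the parts.

def spell_check_stub(word: str) -> bool:
    """Breadth-first placeholder: accept everything so the
    surrounding system can be exercised end to end."""
    return True

def spell_check_real(word: str) -> bool:
    """A deeper replacement for the stub, dropped in later."""
    dictionary = {"cat", "dog", "simulation"}
    return word.lower() in dictionary

def count_errors(words, checker):
    # The rest of the system is written against the checker's
    # interface, so stub and real implementation interchange freely.
    return sum(1 for w in words if not checker(w))

# The same test drives both iterations of the design.
assert count_errors(["cat", "zzz"], spell_check_stub) == 0
assert count_errors(["cat", "zzz"], spell_check_real) == 1
```
<br />
The simulation (the stub) "controls the place and order" of the work, just as Perlis described: it tells you which module to deepen next.<br />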
That is suspiciously similar to an incremental development method. Between the 1968 NATO conference and the 1969 NATO conference, things changed, and there was a clear tension between those who thought programming was best done by an expert technician and those who thought it was best done mechanistically by someone taught a body of scientific techniques. At the end of the 1969 conference, Tom Simpson gave a talk called <a href="https://catenary.wordpress.com/2008/05/14/masterpiece-engineering/" target="_blank">"Masterpiece Engineering"</a> which is oozing with the conflict of technician vs. technique.<br />
<br />
There was definitely a lot of political maneuvering at the NATO conferences, and there are other resources you can investigate if you'd like. The point is that the seeds of agile were there, but for some reason we ended up with 33 years of waterfall.<br />
<h4>Engineering(,) A Path to Science</h4>"Engineering(,) A Path to Science" by Richard P. Gabriel <br />
<a href="http://www.infoq.com/presentations/Mixin-based-Inheritance">http://www.infoq.com/presentations/Mixin-based-Inheritance</a><br />
<br />
"Structure of a Programming Language Revolution" by Richard P. Gabriel<br />
<a href="http://dreamsongs.com/Files/Incommensurability.pdf">http://dreamsongs.com/Files/Incommensurability.pdf</a><br />
<br />
Richard Gabriel's talk comes from an interesting perspective. He was involved in the Lisp community and has an academic background (he earned a PhD), but he is not an academic. After working as a practitioner, he went back to school to earn a Master of Fine Arts. Upon returning to the technical community, he felt a paradigm shift had happened while he was gone. The conferences he used to attend had been renamed and were now focused on academics instead of practitioners. His entire field—Lisp systems engineering—and its journals had been deleted.<br />
<br />
Then he was given the first scientific paper on mix-in inheritance. Being familiar with previous work done on Lisp-based inheritance systems, he felt that this paper was using the same terms to describe some of the mechanisms from the Common Lisp Object System, but the terms had different meanings. Gabriel felt he was experiencing incommensurability: a paradigm shift had happened from an engineering focus to a scientific focus, and now "scientific" papers were being written that described, as new, things that engineers had already known, using the same terms but with different meanings.<br />
<br />
The talk is definitely worth watching. It is an interesting personal story intertwined with technical discussions of the previous work versus the paper he had been given. It is an exploration of whether incommensurability can actually happen and to what extent. He also challenges the myth that science always precedes engineering.<br />
<br />
I'm honestly not sure whether Gabriel intended his talk and paper to have a single point. Maybe he is mostly interested in relating his personal experience, but this is what I took away:<br />
<br />
<ol><li>In general, science does not always precede engineering, and in particular the relationship between computer science and software engineering is even more complex, because the engineers literally create the reality that the scientists study.</li>
<li>There are two approaches to software: the systems approach, and the language approach.</li>
<li>Making engineering subservient to science means throwing away the progress that engineers can and do make.</li>
</ol><br />
This was actually the first talk that started the wheels turning for me on the relationship between science and engineering. I had been told in college that scientists expand the body of knowledge and engineers apply that body of knowledge. Gabriel uses as his example the steam engine. When the steam engine was invented the popular theory used to explain its operation was the Caloric Theory of heat, which stated that there was an invisible, weightless, odorless gas called "caloric" that permeated the Universe. The amount of caloric in the Universe is constant, and its interaction with air molecules can explain heat and radiation, and from it you can deduce most of the gas laws. The Caloric Theory was a useful theory with predictive power. When Laplace adjusted Newton's pulse equations to account for caloric, he was able to more accurately predict the speed of sound.<br />
<br />
Eventually the Caloric Theory was replaced by Thermodynamics, and amazingly steam engines continued to work! The steam engine was developed by mechanics who observed the relationship between pressure, volume, and temperature. Whether its operation was explained by the Caloric Theory or Thermodynamics made no difference to them. Yet, an engineer's invention can and does spark the curiosity of a scientist to develop a theory to explain how it is that an invention works. This is even more true in the case of computer software.<br />
<br />
The second moral I drew from Gabriel's talk is that there are (at least) two approaches to software: a systems approach and a language approach. Gabriel acknowledges that at first he thought the incommensurability that he saw was a difference between an engineering paradigm and a scientific paradigm, but eventually he saw it as a more technically focused conflict between a systems paradigm and a language paradigm. Perhaps what Gabriel means is that you can approach either systems or languages from an engineering or a scientific perspective. However, I tend to see systems versus languages as engineering versus science.<br />
<br />
The systems paradigm views software as interacting components forming a whole; real stuff doing real things. The language paradigm views software as abstract signs and rules of grammar conveying meaning. Good design, from a systems perspective, comes from a skilled technician following good design principles (I would even call it aesthetics). Good design, from the language perspective, comes from a relatively less skilled technician working within a language that from the outset excludes bad design through grammatical rules and compilers. The system approach tends to view software as a living organism that is incrementally poked and prodded, changed and observed. The language approach tends to view software as a series of mathematical transformations, preserving meaning. If each of the paradigms were a theory of truth, the systems paradigm would be correspondence, and the language paradigm would be coherence.<br />
<br />
I see system versus language as engineering versus science. I view engineering as a bottom up, incremental, tinkering approach, at least when it comes to software and the way I like to practice software engineering. I view science as a top down, formal, mathematical approach. I actually like both, and I think both have their place, but when engineering is made subservient to science, we're actually losing something very important. When engineers are shut out of conferences and journals, there are discoveries that will be left unpublished, and new scientific theories left untheorized. (This was what Gabriel saw happening.)<br />
<h4>Computer Programming as an Art </h4>"Computer Programming as an Art" by Donald Knuth<br />
<a href="http://dl.acm.org/ft_gateway.cfm?id=1283929&type=pdf">http://dl.acm.org/ft_gateway.cfm?id=1283929&type=pdf</a><br />
<br />
For those with even a cursory exposure to Computer Science, Donald Knuth needs no introduction. Knuth is coming from an academic perspective, but even for an academic his perspective is a bit unique. He has created and maintains several large open source software projects. This is his ACM Turing Award lecture given in 1974. He starts by quoting the first issue of the Communications of the ACM (1959). It claims that for programming to become an important part of computer research and development (to be taken seriously) it needs to transition from being an art to a disciplined science.<br />
<br />
The big idea I draw here is: Programming can be art (in the "fine art" sense), which means it is (at least sometimes) a creative endeavor.<br />
<br />
Knuth first explores the definitions of "art" and "science," looking at their use over time. Their use was (and is) not consistent. At times "science" and "art" are used interchangeably. "Art" was used to describe something made by human intellect, not nature. Eventually "science" came to mean "knowledge" and "art" came to mean "application," though even that usage is not universal. To Knuth, an "art" is something that is not fully understood and requires some aesthetics and intuition. A "science" is something well understood, something that can be mechanized and automated—something that can be taught to a computer. Can computer programming be taught to a computer?<br />
<br />
Knuth does not think that programming can ever be fully automated. However, it is still useful to automate as much as possible, since it advances the artistry of programming. He believes, and cites others, that progress is made not by rejecting art in the name of science, nor science in the name of art, but by making use of both. He makes reference to C. P. Snow's "The Two Cultures" as an example of another voicing concern about separating art and science. At this point when he speaks of art he means something more along the lines of "fine art" than "engineering."<br />
<br />
Knuth goes on to talk of creativity, beauty, art, and style. He hits on how sometimes resource constraints can force a programmer to come up with an elegant solution, and this has an artistic aspect to it. He also encourages people to, when it comes to programming, make art for art's sake. Programs can be just for fun.<br />
<br />
Knuth's talk is focused on the act of programming, and when he deals with engineering versus science he means with respect to the act of programming. To what extent can the act of programming be made automatic? To what extent must it remain a human act of creativity? This is a little further afield of the previous sources, but Knuth's insistence on seeing programming as a creative act is the big idea I drew from his talk, and is really the point of his talk.<br />
<br />
Given that programming can sometimes be a creative act, it raises a lot of questions in my mind. Is programming always a creative act? If programming is a creative act, how should a programming project be managed? Is the high failure rate of software projects related to this? Perhaps this ties back into Tom Simpson's "Masterpiece Engineering" satire. Imagine a project manager with a room full of artists creating Gantt charts and task dependency graphs to plan out the creation of a new masterpiece!<br />
<br />
On the other hand, nothing appeals to the ego more than seeing oneself as a grand master of art. There should be a measure of moderation here. I think there is benefit to trying to understand programming as an artistic (or at least "creative") endeavor, whatever that means, but we should not go crazy with hubris.<br />
<h4>Better Science Through Art</h4>"Better Science Through Art" by Richard P. Gabriel and Kevin J. Sullivan <br />
<a href="https://www.dreamsongs.com/Files/BetterScienceThroughArt.pdf">https://www.dreamsongs.com/Files/BetterScienceThroughArt.pdf</a><br />
<br />
"Better Science Through Art" by Richard P. Gabriel<br />
<a href="https://www.tele-task.de/archive/video/flash/12636/">https://www.tele-task.de/archive/video/flash/12636/</a><br />
<br />
I have already covered some of Gabriel's background, but I will say that having been involved and educated in both a technical field and an artistic field gives him a unique perspective on the relationship between science, engineering, and art.<br />
<br />
I unfortunately don't know much about Sullivan's background, other than that he is a professor of computer science at the University of Virginia. His collaboration with Gabriel produced one of my favorite papers ever. I don't know that I can tease out what should be attributed to whom. I will be basing my comments on Gabriel's talk, but I don't intend to attribute everything to him or to diminish Sullivan's contributions.<br />
<br />
The big ideas I drew from these are:<br />
<ol><li>Science, engineering, and art all have at their core "disciplined noticing."</li>
<li>Disciplined noticing is a skill that requires practice.</li>
<li>The creation of knowledge—even in the case of science—requires an abductive leap powered by creative spark. </li>
</ol><br />
This is a really great talk, and covers a lot of ground. It is entertaining, insightful, and very worth watching. He attacks some common caricatures of science, engineering, and art, and digs into the actual process behind each. In the end, he finds that there are a lot of similarities to the methods in science, engineering, and art. It is a process of exploration, discovery, and verification. He calls it disciplined noticing.<br />
<br />
I have found this to be true in my experience. Just like people have a caricature of science, that it is straight line progress, the monotonic aggregation of knowledge, there's a similar caricature of software development. My experience has been that writing software is a creative, exploratory process. Sometimes I go down an alley, but find that I need to back out and take a different turn. I may write a test, run it, change some code, change a test, run it, think for a while, delete a bunch of code and rewrite it all.<br />
<br />
In my experience this process—writing, evaluating, and rewriting—has much more in common with writing a novel than constructing a building.<br />
<h4>Conclusion</h4>This long meandering post must come to an end. First of all, I would highly recommend looking at each of these cited sources. They will reward you. Perhaps you may even find that I have seen them through my own preconceived notions, and you may draw an altogether different conclusion from them. So be it.<br />
<br />
This "conclusion" is not really a conclusion, but a way-point. I started on this journey to understand the nature of software engineering, how it is different from other kinds of engineering, and why it is so hard. I ended up at a place that intuitively I knew I would end. I will not make an absolute statement. I will say that at least sometimes (and in my experience) software development is a creative process more akin to creative writing.<br />
<br />
I have also seen that there is a tremendous amount of creativity in both engineering and science. I believe that at the core of engineering, science, and art is a drive to understand and influence the world, which requires observation, testing, and evaluation. I don't claim to know how to do software engineering "right," but I don't think we will ever do it right if we refuse to see that creativity (which is at times unpredictable) is a key part of the effort.<br />
<br />
I have learned that both engineering and science are useful for discovering and validating knowledge. Scientists and engineers should collaborate. Neither should be seen as primary at the expense of the other. They can even be seen as external expressions of the same process, sometimes using similar tools and techniques.<br />
<br />
I have learned that software is unique in engineering. Whereas a blueprint is a written artifact using specialized notation, the building it describes must be brought into existence through a complex, error-prone process. Code is written using specialized notation, but the gap from code to execution is much smaller. There are pitfalls and challenges, no doubt, but I would like to see how the nature of what we produce can change how we produce it. I'm still holding out hope that the nature of software can change the face of the human organizations that produce it.<br />
<br />
Practically, what this all means is that a software engineering process should be iterative. It should embrace unpredictability and allow space for the creative process. In the same way that a painter never thinks a painting is complete, software should be developed in a way that continuously produces value, so that the project could be closed down and the product shipped at any point with the customer still happy with the result.<br />
<br />
So I end back at the beginning with Vanderburg. I don't think that agile is the last word, but I think it is the best we have so far.<br />
<h3>Manufacturing Creativity</h3><a href="http://paul.stadig.name/2016/03/making-fake-things.html">Previously</a>, I've attempted to convince you that making software is a creative act, and I explored the implications for pursuing and managing software engineering. (By the way, science and engineering are also creative acts, and a great exploration of that idea is "<a href="http://dreamsongs.com/Files/BetterScienceThroughArt.pdf">Better Science Through Art</a>" by Richard P. Gabriel and Kevin J. Sullivan. Love that paper.)<br />
<br />
I've been thinking a lot lately about creativity and how it can be encouraged (even manufactured?). I've also been thinking quite a bit about why people do (or do not) take on ambitious projects, and how to survive a years long ambitious project. I've learned some very interesting things that some day I may write about, but I'd like to share what I've learned about creativity.<br />
<br />
What I've discovered about being creative is that even among people in very different lines of work (actors, writers, artists, programmers, scientists, investors) there's a surprising amount of agreement about how it works. I've also discovered that creativity is not an innate talent that some people have and some do not. Everyone has the tools to be creative.<br />
<br />
In many ways this goes all the way back to the very first Clojure Conj in October of 2010. Rich Hickey gave a talk titled "<a href="https://www.youtube.com/watch?v=f84n5oFoZBc">Step Away from the Computer</a>"...actually, it had three titles, and it is best known by one of its other titles "Hammock-Driven Development." I was there in person. I came away with the mistaken impression that the talk was about writing software and solving technical problems. I now know that making software is a creative act, and Rich's talk was about how to be creative.<br />
<br />
For Rich, the engine of creativity is the "background mind," in contrast to the "waking mind." Your waking mind is your normal mode of operation. It is good at analyzing and thinking critically, but it can be too tactical and get stuck in local maxima. Your background mind is good at making connections, thinking abstractly, and synthesizing. It can make the leap past local maxima. Unfortunately, the background mind cannot be tasked directly. However, you can task it indirectly by obsessively thinking and reading about a particular problem, and, though you can activate it in other ways, it is easiest to activate by sleeping, or by relaxing and simulating sleep (i.e., using a hammock).<br />
<br />
So, creativity is an indirect process: a relaxed mental mode that you task by obsessively thinking about a problem, and whose products you filter only after the fact with your normal critical-analytical mental mode. Now here's the surprising part: almost everyone who attempts to describe their creative process describes it similarly. In his essay "<a href="http://paulgraham.com/top.html">The Top Idea in Your Mind</a>," Paul Graham says:<br />
<br />
<blockquote>Everyone who's worked on difficult problems is probably familiar with the phenomenon of working hard to figure something out, failing, and then suddenly seeing the answer a bit later while doing something else. There's a kind of thinking you do without trying to. I'm increasingly convinced this type of thinking is not merely helpful in solving hard problems, but necessary. The tricky part is, you can only control it indirectly.</blockquote><br />
John Cleese gave a talk on <a href="https://www.youtube.com/watch?v=Pb5oIIPO62g">creativity</a>, and he called the background mind "open mode" and the waking mind "closed mode." In your open mode, you are relaxed, less purposeful, curious, and a bit playful. In your closed mode, you are active, determined, and have a critical eye.<br />
<br />
George Land is a businessman who <a href="https://www.youtube.com/watch?v=ZfKMq-rYtnc">investigated</a> how to stimulate and direct creativity. He found there are two kinds of thinking: divergent and convergent. Divergent thinking is creating new ideas. Convergent thinking is judging and evaluating ideas. He did a longitudinal study that found that 98% of 5-year-olds exhibit divergent thinking, 30% of 10-year-olds, 12% of 15-year-olds, and only 2% of adults. As a person gets older, he or she is taught to use both divergent and convergent thinking at the same time. The result is that one criticizes and judges ideas before they can fully develop.<br />
<br />
Of peculiar interest to me has been what independent game designer Jonathan Blow—who worked on his successful and influential game Braid for 3.5 years—has said about creativity and <a href="https://www.youtube.com/watch?v=d0m0jIzJfiQ">surviving ambitious projects</a>. (Maybe someday Rich will talk about how he survived his own ambitious projects: how to maintain motivation day-to-day, how to fund it, how to plan and pace it, how to finish it.) The thoughts about ambitious projects are for another time, but what he says about creativity should be familiar by now. Blow says metaphysically you may not buy into the Greek concept of the Muse—nor may he—but functionally it is real. Creativity feels like something external, and you have to get yourself into a relaxed mode to provide opportunity for new ideas, though you cannot guarantee anything.<br />
<br />
<h3>Tools and Techniques</h3><br />
I hope to find more resources on direct techniques for stimulating creativity (e.g., instead of thinking about solving a problem, think about how to make it worse and avoid that), but for now I've found a lot of agreement about how to encourage creativity in an indirect way.<br />
<br />
<b>Obsess about your problem.</b> If your subconscious mind (or unconscious mind or background mind or whatever you want to call it) is going to solve a problem for you, then it needs information. Rich has a lot of great advice about this. Write down your problem. Write down what you know. Write down what you don't know. Read about your problem. Read about related problems. Pick apart other solutions. Paul Graham in "The Top Idea in Your Mind" says, "It's hard to do a really good job on anything you don't think about in the shower."<br />
<br />
<b>Relax.</b> For Rich this is lying in a hammock and focusing, thinking through all the information you've loaded into your mind. For Blow, a relaxed state of mind is really a pretty active body; he likes to find something purely physical that he can enjoy, like going to a club and dancing. Cleese creates an oasis, blocking off time and setting aside other concerns. He gives himself enough time that he can work through all the TODOs that pop into his head: he writes them down for later and gets back to being relaxed and playful.<br />
<br />
<b>Pace yourself.</b> Cleese recommends, if you're going to try to set aside time for creativity, to limit it to no more than an hour and a half, because you'll need a break. If you need more time, then do it again the next day.<br />
<br />
<b>Be playful.</b> George Land found that children are more creative. Cleese finds being in a playful mood conducive to creativity, especially when collaborating with others. Play, imagination, and daydreaming all come from or lead to a relaxed state of mind, which accesses your creative mechanism.<br />
<br />
<b>Write things down.</b> Rich is big on this. There are several benefits: it helps you think thoroughly, it helps you remember things, it is easy to skim for recall.<br />
<br />
<b>Gently keep your mind focused.</b> Cleese says to be successful you must keep your mind gently around the problem. You may wander off, but gently come back to it. Rich uses hammock time not just to relax, but to recall information. Touch each fact with your mind to keep it fresh, and to make it interesting to your background mind.<br />
<br />
<b>Have a dogged persistence.</b> Cleese sticks with a problem and doesn't just take the first idea he comes up with. Sometimes a creative breakthrough requires persisting through the discomfort, even slight anxiety, of an unsolved problem. Rich reminds us that since this is an indirect process, it may take days, months, or years for a solution to come.<br />
<br />
<h3>Anti-techniques</h3><br />
How can you destroy creativity? Easy:<br />
<br />
<b>Chase success.</b> Paul Graham says the way to destroy your creativity is to make money the top idea in your mind. It tends to consume all your mental energies. Blow also warns about thinking about success or how others will judge what you do. These things can easily lead to fear, and as Cleese says you need to feel confident to be able to generate ideas.<br />
<br />
<b>Obsess about disputes.</b> Paul Graham talks about how Isaac Newton got involved in disputes and regretted the wasted energy. This is really just another form of worrying about what other people think.<br />
<br />
<b>Make a schedule.</b> Blow warns about making a schedule, but also admits that we must all deal with schedules. Rich says his techniques don't work under pressure. While Cleese sets aside time to be creative, he recognizes that the process is unpredictable and needs time.<br />
<br />
<b>Pre-judge ideas.</b> You must be open, Cleese doesn't call it "open mode" for nothing. Brainstorming forbids judging ideas, and as a technique it gets that much right. George Land found the more we use divergent and convergent thinking together—in other words the more we try to pre-judge ideas—the less creative we will be.<br />
<br />
<b>Get distracted.</b> Cleese says you need to create a space free from distractions. For Blow, even the threat of a distraction can prevent him from relaxing, so he'll spend a few hours at a coffee shop before heading into the office.<br />
<br />
<b>Eliminate humor.</b> According to Cleese humor is about two frameworks coming together to make new meaning, and this is also the core of creativity. If you eliminate humor, then you eliminate creativity.<br />
<br />
<b>Live actively and urgently.</b> If you want to ensure no relaxation happens, if you want to ensure that you are in closed mode, then live urgently and actively.<br />
<br />
<h3>Conclusion</h3><br />
If you're here, you are probably a computer programmer (most likely a Clojure programmer). That means you're probably a bit like me. You're good at thinking analytically and logically. You're good at judging solutions based on correctness, performance, etc. You're good at operating in "closed mode." These are great skills, and as Cleese says we need both open and closed mode to succeed: open to generate ideas, and closed to execute on them. We just may need to work on the open mode a bit.<br />
<br />
You have the ability to be creative. You have a relaxed, curious, playful, imaginative self. There are some techniques that others have used that may help you access your creativity. They may help you, they may not. You may need to experiment a bit for yourself.<br />
<br />
You cannot fully control this process. You can only indirectly stimulate creativity, and you cannot guarantee that your mind will solve the problem you want it to solve. One approach would be to work on several problems at once! You may also find some fruitful connections between the problems.<br />
<br />
To be creative you must be persistent, and you must practice. I hope this helps you find those imaginative solutions.Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-70533547218694267712018-05-21T09:00:00.001-04:002018-05-21T15:23:32.313-04:00GardeningIf you're like me, you spend 8+ hours a day in front of a screen. About five years ago, I decided that I needed better hobbies than learning new programming languages and writing code for personal projects. I wanted to find ways to learn new skills and connect with people. I've done that by playing board games at local meetups and building a robot, and I've done that with gardening.<br />
<br />
Gardening has been incredibly frustrating and incredibly rewarding in a roller-coastery kind of way. I'd like to share my journey with you in an attempt to get you interested in gardening. I'll share some resources I've found interesting and useful.<br />
<br />
<h3>Why gardening?</h3><br />
I chose gardening for many different reasons. I remember my parents having a garden when I was a kid, and I wanted to have a hobby that my kids could be involved in and excited about. I like to eat things like tomatoes that my wife does not often buy, because no one else (including her) likes them. I wanted to do something outdoors. I wanted to become a little more self-sufficient.<br />
<br />
Those are some of my reasons, but maybe you have other reasons. Maybe you'd like to reduce your carbon footprint by producing your own food that doesn't get shipped halfway across the world. Maybe you like the idea that food from your garden is essentially tax-free income. Maybe you want to increase the diversity in your diet and/or help preserve and conserve heirloom food varieties that are endangered. Maybe you don't want to grow food at all, but flowers that provide a vibrant, delicate beauty.<br />
<br />
<h3>How gardening?</h3><br />
There are many ways to garden from containers to raised beds. One of the things I enjoy about gardening is an entire world of new things to learn. It is a gateway hobby into things like cooking, canning, composting, carpentry, and other words that begin with 'c'.<br />
<br />
I have focused mostly on fruits and veggies, since I want to be able to eat from my garden, but I've also grown (and grow more and more) flowers. I've grown some edible flowers and some inedible. It is incredibly satisfying to have some color around the house.<br />
<br />
I started small with some containers on my deck. I used a couple of EarthBoxes, then built my own DIY EarthBoxes. I like the sub-irrigated planter (SIP) concept so much that I'm planning on putting in a raised bed SIP in my backyard, automatically fed by rain barrels. If you want to learn more about SIPs, check out <a href="https://www.youtube.com/albopepper">https://www.youtube.com/albopepper</a>.<br />
<br />
Gardening (like most hobbies) can be as expensive as you let it be. You can buy all kinds of gardening gadgets and gizmos. One of my goals is to make gardening as economical as possible. To garden you need:<br />
<br />
<ol><li>Plants</li>
<li>Sun</li>
<li>Water</li>
<li>Nutrients</li>
</ol><br />
The sun part is pretty easy, since my back yard is south facing. I just need to work around the shadows cast by trees and the deck.<br />
<br />
You can buy seeds pretty cheaply, but you can also harvest seeds from your plants, so you don't have to continually buy seed packets. This will only work with open-pollinated (OP) plants. Check out this video to learn about OPs, hybrids, and heirlooms: <a href="https://www.youtube.com/watch?v=zkMEmkecSHs">https://www.youtube.com/watch?v=zkMEmkecSHs</a>. Often, it is easier to buy seedlings at a nursery or farmer's market.<br />
<br />
You can also plant perennials like strawberries and asparagus. These don't need to be replanted every year. You plant them once and you can harvest for years. <br />
<br />
You can obviously water your plants with your tap, but rain barrels are a way to save money by taking advantage of an abundant resource over our heads. You can buy rain barrels, or you can make your own. My water company even gives a $30 rebate each for up to two rain barrels that I install.<br />
<br />
Plants need nutrients, and nutrients can be provided by fertilizer. I still use fertilizer occasionally, but I've opted to make my own compost. Composting leaves is usually the easiest way to start, but unfortunately I don't have many trees. However, I am composting what leaves I have along with grass clippings and cardboard boxes from all my Amazon Prime orders. I compost trimmings from my garden, and kitchen waste. I'm even thinking about getting some composting worms! Here is a video about how ridiculously easy it is to compost: <a href="https://www.youtube.com/watch?v=n9OhxKlrWwc">https://www.youtube.com/watch?v=n9OhxKlrWwc</a>.<br />
<br />
<h3>Lessons Learned</h3><br />
I've been gardening about five years, and here are some lessons I've learned.<br />
<h4>Time and timeliness.</h4>As a software engineer, I work in a field where I'm constantly learning, and there's a new JavaScript framework every week. I enjoy being more aware of the weather and seasonal rhythms. Plants work on a different timescale. If something goes wrong with the crop this year, I may have to wait another whole year to try again. That can be frustrating, but it can also be an opportunity both to think over a longer timescale and to be very focused on what is happening right now because the stakes are high.<br />
<h4>Everything wants to kill your plants.</h4>In container gardening on my deck I've dealt mostly with insects, and there are billions of them. When I moved into raised bed gardening with my strawberry patch, I had to deal with deer eating all the leaves off my strawberries. For the past couple of years it has been impossible for me to grow zucchini or squash, because vine borers have eaten them from the inside out. I'm not necessarily a fan of squishing bugs, but there was nothing more satisfying than digging those buggers out and squishing their fat bodies. It was a kind of anger management program.<br />
<br />
The lesson is you need to think about pest management from the beginning. Talk to your neighbors about what pests they've dealt with in their gardens. Or at least be prepared that the first year could be rough until you know what you're up against. When you do know what you're up against...research! If you live in the US look up your local cooperative extension website. Virginia's has all kinds of great publications for growing things in my region.<br />
<h4>Your plants want to live</h4>Even the sun can sometimes be brutal on your plants. I tried seed starting a couple of years ago. The last step is to "harden off" your plants by gently exposing them to the elements. I was a little less than gentle and nearly killed my plants.<br />
<br />
After the hardening off incident I felt like a bad plant daddy, but the amazing thing was my plants came back. They want to live. They are partners in this gardening adventure.<br />
<h4>It is satisfying to make things grow</h4>It can sometimes be difficult to diagnose what is wrong with a plant: is it overwatered, underwatered, missing some nutrient, etc? Plants are complicated yet fascinating living things. It is worth the effort to understand them and work with them. One of the most fascinating books I've read is <em>Botany for Gardeners</em> by Brian Capon <a href="http://a.co/7SSM1Wi">http://a.co/7SSM1Wi</a>. I really enjoyed Brian's writing style, and it is a very approachable introduction to cellular function, propagation, and the fascinating life of plants.<br />
<br />
In the end there is a lot to learn, and it is hard work, but it is so satisfying to nurture a living thing.<br />
<h4>It is satisfying to work hard</h4>I have a personal rule that, as much as possible, I will not have someone else mow my lawn. It saves money. I listen to podcasts and audio books. I like to walk around my house and property (only 1/3 acre but still) and see how things are doing. It can be hard work since my yard is mostly a hill, but I like to get the exercise.<br />
<br />
Gardening can be hard work, too. One Sunday afternoon, in addition to mowing and edging, I pulled out two bushes (which if you've ever done, then you know), and planted an apple tree and six red raspberry canes. I was sunburnt and sore, and paid for it the next day, but it was satisfying, and I'm looking forward to the fruit of my labor (literally!).<br />
<h4>Play the odds</h4>I recommend starting small, because like any hobby you can get excited and spend a lot of money before you realize it. However, you also have to know that gardening is about playing the odds, so don't start too small. When you start seeds, you put three in each hole, and when they sprout you thin them down to just the strongest of the seedlings. If you buy tomato seedlings from a nursery, don't just buy one, buy two or three. You have to expect that some plants won't survive.<br />
<br />
It can also be helpful to plant more than one kind of thing. You may not get everything you want, but you should plant a diverse mix of plants and enjoy whatever you get. If you only plant cucumbers, then a horde of cucumber beetles can destroy everything, but if you also have tomatoes, then it's not a total wash.<br />
<br />
<h3>Conclusion</h3><br />
Have I accomplished my goal of learning new skills and getting to know people? Absolutely! Of the five houses that border mine, three belong to gardeners, and when I'm out early in the morning tending my garden my neighbors are often out, too. I've had chances to get to know them.<br />
<br />
I've gotten outdoors. I've gotten plenty of exercise. My kids are involved and excited about gardening. They even eat things they normally wouldn't, because we've grown them ourselves.<br />
<br />
If you want a hobby to get you away from the screen and doing something physical in the real world, then give gardening a go.Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-81544068026263137602017-09-07T09:24:00.002-04:002017-09-07T09:24:59.328-04:00The Ethics of Software QualitySecurity professionals are in a hard place. If there is a security breach, they take the fall. However, if they do their job right, no one notices. Further, they may even meet resistance to doing their job right because they are being overly cautious, taking too much time, costing too much money, etc., etc.<br />
<br />
I think a software professional who wants to create quality software faces the same challenge. You may deliver quality software, but then get accused of taking too long (according to some arbitrary idea someone has) or "gold plating." You get compared to co-workers who write code much faster, even though it may have more bugs. Focusing on speed as a primary metric for software development is a race to the bottom.<br />
<br />
This is not to say that there aren't times when something needs to be timeboxed, or a programmer needs to resist "gold plating." It is possible to fall into a trap of tweaking and refactoring <i>ad infinitum</i>. However, I don't find that there is a bright line or objective standard for judging this. Maybe that is because I believe software development to be a creative, exploratory process, so I'm apt to think there's more than a little taste and discernment.<br />
<br />
To produce quality software you must take an ethical approach. What do I mean by this? While it seems obvious that there are ethical issues in software development---for example poor quality software wastes time and money, causes frustration, and in the extreme case can cause damage to property and loss of life---that's not what I mean.<br />
<br />
What I mean by "ethical approach" (and maybe there's a better term for it) is you must have an intrinsic motivation to create quality software. You have to do it because "it's the right thing." You will rarely get support from managers to produce quality software. You will shoulder the blame for quality issues in your code. If your code is beautiful and functional and bug-free, rarely will anyone even notice, let alone commend you.<br />
<br />
How can you develop a "software quality conscience"? I don't have all the answers, but I have a couple of suggestions:<br />
<ol>
<li><b>Read good code and read about good code.</b> If it is garbage in, then it will be garbage out.</li>
<li><b>Surround yourself with other people who care about quality.</b> Find a team of like-minded people, whether it is at work or not.</li>
<li><b>Keep things in perspective.</b> Further into my career, I can look back at the bosses who blustered at me to get things done by a certain time ("do or die") and see that those deadlines rarely had a huge impact on the success or failure of my project or company. Don't be insubordinate or lazy, but don't buy into the hype. Be realistic.</li>
</ol>
You are responsible for fighting the good fight. So step up. Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-2676190026871185502017-02-27T08:58:00.001-05:002017-02-28T06:44:43.145-05:00Continuous Planning"In preparing for battle I have always found that plans are useless, but planning is indispensable." -- Dwight D. Eisenhower<br />
<br />
There is a tension between engineering on the one hand, and on the other hand those who would like to know when the task will be done. A product must be marketed, documented, sold, and supported. "When it's done," is useless when you're trying to sell to a customer against a market full of competitors. However, the software we write gets more complex each day, and the process for bringing it to life is complex. Complexity means unknowns, and unknowns mean uncertainty. A software project is like a hurricane with a cone of uncertainty preceding it. This tension between the desire to know and the reality of uncertainty is a fundamental part of working a software project (and probably other kinds of projects).<br />
<br />
Before going too much farther, I will state my assumption: a completion date is an output, not an input, and the most effective tool for managing a completion date is changing the amount of work you want to do (i.e. "scope").<br />
<br />
You cannot take a date and work backwards. This is no different than taking a date and working forwards. Well actually, there is a big difference. In working forward, you can always push the completion date out. In working backwards you cannot start any earlier than now. The completion date inevitably follows from when you start, how quickly you can work, and how much you are trying to do.<br />
<br />
You can spend money on tools, training, consultants, but these each have a time cost.<br />
<br />
You can add more people, but in order to establish a context on a project a new person must learn a code base, tools, technologies, personalities of the team, and to do so he or she must take time from an otherwise productive member of the team.<br />
<br />
You can have the current team work overtime, but too much of that will cause quality issues and burnout.<br />
<br />
You can relax expectations about quality, but that is just trading your future time to get something done more quickly and temporarily.<br />
<br />
The best thing you can do to manage a completion date is to cut the amount of "stuff" you are trying to do, or to rearrange the order of when you will do it, so you get the things you want earlier than you otherwise would have.<br />
<br />
Given that a date is an output, as engineers and managers we try to navigate this tension between the desire to know and the reality of uncertainty with planning, but there's a problem with plans: they're useless. Imagine planning a single task. When will it be done? Well, if you ask one engineer she will give you an estimate based on her skill and experience. If the task is ever given to another engineer, then that estimate is invalidated. On top of that, an engineer (or human really) is notorious for estimating only the amount of work she must do. She doesn't think about QA testing, deployment, and data migrations, among other things. Nor does she think to factor in overhead like meetings, filling out time cards, learning new skills, bonding with coworkers, etc.<br />
<br />
That is just at the most atomic level of estimation. Once you start to think about collaboration things get more complex. Does our engineer need to get a review from a coworker? That coworker is now being taken off of his task to do the review, which can lead to delays. What if our engineer needs assistance from someone more familiar with a particular technology or part of the code base? What if our engineer wants to brainstorm with another engineer? If our engineer gets delayed then any tasks that were dependent on her task also get delayed.<br />
<br />
Now imagine making a plan for a product that spans several teams and tens or hundreds (or thousands??) of people. If you don't know everything that everyone is working on and how they are all related, then you can't plan out anything with certainty. And that is only taking into account everything that can be known; there are still unknowns (like someone getting sick). This is the uselessness of a plan.<br />
<br />
Well, a plan is not entirely useless. It is probably very accurate for the tasks that will be started in a few days, but entirely inaccurate for the tasks that will be started in three or six months.<br />
<br />
So, there are two problems with a plan: 1) it must be updated to reflect new information, and 2) it fails to take into account the "cone of uncertainty."<br />
<br />
Updating a plan seems easy enough; however, the larger the plan the more work it will take to keep updated. One could certainly employ an army of project managers who verify that the task breakdown, estimates, and dependencies have not changed; that you've taken into account every meeting, vacation plan, all the testing, deployment, and overhead. Ideally the plan would be updated continuously (so it's more of a "dashboard" than a "plan"). More valuable than knowing that three months ago we thought a task should be complete on such and such a date would be knowing when we think it will be completed as of now, with all the latest information we have access to, but that would create quite a drag on the entire team.<br />
<br />
Even if you could keep the plan up-to-date, it gives the false impression that one can know precisely when a task will be complete. You may be able to predict the completion date of a task that starts tomorrow, but not for a task that will start in three months. Three months provides plenty of time for both knowns and unknowns to change when the task could even start, let alone when it would complete.<br />
<br />
Usually, this uncertainty is handled by "padding" the date, but this is not enough. A single point-in-time completion date conveys certainty, and this is certainly wrong. The completion of a task should always be a range, one that is narrow for the near future and wide for the distant future. <br />
<br />
Incidentally, I think even agile burndown charts get this wrong. In my opinion, there is (and should be) variability to a team's velocity. Simply taking some velocity value and running it out a few months to predict a single point in time when a task will be complete is at odds with reality.<br />
<br />
What does Continuous Planning look like? Well I don't really know, because I just made it up! At a high level I would summarize it as: plan using real data, with task completion <i>ranges</i>, over as long a term as you want, in aggregate, on average, continuously. The task completion ranges are the key. You can plan over as long a term as you want, however, the ranges will get wider. If you can reduce the variability in your process---and prove it with your data---then you can narrow the ranges. Planning is done in aggregate and on average, because it is impossible to know and manage every possible factor, so we must abstract away much of the minutiae. Finally, to plan continuously implies some kind of tool to facilitate.<br />
<br />
To the extent that I have thought about how this would work out practically, this is what I would do:<br />
<br />
Each team would estimate their tasks by each member recording the number of hours he or she thinks it would take him or her to complete the task, given that there are no other distractions. This is a kind of pure estimation that engineers usually make. It would be helpful to discuss the task as a team, and try to elicit different opinions on the complexity of the task, so the estimates will be as complete as possible.<br />
<br />
Why not use story points? I have been a fan of story points precisely because they abstract away hours. Hours can vary depending on a person's skill and experience. Hours can get lengthened by interruptions and discovering additional complexity. Hours give a false impression that they map directly to calendar time, and that you can accurately predict when a task will complete.<br />
<br />
However, the first thing a person asks is how long X points take. Usually you have to pick as a standard comparison some "golden story" for a certain number of points. People will usually consciously or unconsciously come up with some rule of thumb like, "an eight point story should take about a sprint to complete." So in the end you are estimating in hours, but they're a convoluted form of hours.<br />
<br />
Hours are a natural unit for estimations. The danger in using hours is actually trusting the estimate for a precise completion date. We've already rejected precise completion dates with Continuous Planning, and the rest of the process is designed around (automatically) finding an accurate scale with which to judge these estimates. I would actually advocate that the estimates and velocities be hidden variables, and (other than your own estimate) you only see the completion range for a task. This would hopefully reduce some confused expectations around what it means for an estimate to be denominated in hours.<br />
<br />
The estimates from each team member would be combined together into a single estimate for the task. The method for combination could be taking an average. It could involve throwing out extreme values first, or doing some sophisticated statistical analysis.<br />
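<br />
For instance, a sketch of that combination step in Clojure (the function name and the trim-the-extremes strategy are my own illustrative assumptions, not a prescription):<br />
<pre><code>;; Combine per-member hour estimates into one task estimate by
;; dropping the lowest and highest values and averaging the rest.
(defn combine-estimates [estimates]
  (let [trimmed (if (> (count estimates) 2)
                  (-> estimates sort rest butlast)
                  estimates)]
    (double (/ (reduce + trimmed) (count trimmed)))))

;; (combine-estimates [4 6 5 40]) averages only 5 and 6,
;; ignoring the extreme values 4 and 40
</code></pre>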
<br />
Having done these estimates, a task tracking system would keep track of when tasks started and when they completed, or how long they've been started even if not complete. This actual data can be used to calculate a velocity. The velocity could be calculated at several levels. You could calculate the velocity for an individual task, for a particular team member, for the team as a whole. You could even calculate the velocity for a feature epic cutting across several teams.<br />
<br />
To calculate a date range for completion, you can take the average plus or minus a standard deviation for a single velocity calculation over time and get an optimistic and pessimistic velocity. You could also get an optimistic and pessimistic velocity by taking the minimum and maximum of the most recent velocity calculations at two different levels (task and team, for example). I'm not sure which would work best; it warrants some research.<br />
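<br />
As a sketch of the first approach (mean plus or minus one standard deviation over a history of velocity samples; the function names are assumptions of mine):<br />
<pre><code>(defn mean [xs]
  (/ (reduce + xs) (count xs)))

(defn std-dev [xs]
  (let [m (mean xs)]
    (Math/sqrt (double (/ (reduce + (map #(let [d (- % m)] (* d d)) xs))
                          (count xs))))))

;; Derive a pessimistic and optimistic velocity from recent samples.
(defn velocity-range [velocities]
  (let [m (double (mean velocities))
        s (std-dev velocities)]
    {:pessimistic (- m s)
     :optimistic  (+ m s)}))
</code></pre>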
<br />
Tasks would be a hierarchical tree. An epic is really just a task with subtasks. The velocities and estimates can flow up the tree for the purposes of calculating estimated completion ranges for epics.<br />
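<br />
That roll-up might be sketched like this, assuming a task is a map with an :estimate in hours and a vector of :subtasks (a shape I am inventing for illustration):<br />
<pre><code>;; Sum a task's own estimate with the estimates of its whole subtree.
(defn rolled-up-estimate [task]
  (+ (:estimate task 0)
     (reduce + 0 (map rolled-up-estimate (:subtasks task)))))

;; An epic is just a task with subtasks:
;; (rolled-up-estimate {:estimate 0
;;                      :subtasks [{:estimate 8} {:estimate 13}]}) ;=> 21
</code></pre>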
<br />
If you wanted to get fancy, you could draw dependencies between tasks, and the system could then attempt some kind of topological sort of the tasks, and using a prioritized backlog and team assignments to each task, construct a plan for what could be done in parallel, and---based on velocities calculated from real data---calculate a completion range for each task, epic, and the project as a whole.<br />
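<br />
A minimal sketch of that dependency ordering (a Kahn-style topological sort; the shape of the deps map is an assumption of mine):<br />
<pre><code>;; deps maps a task id to the set of task ids it depends on.
(defn topo-sort [deps]
  (loop [order [] remaining deps]
    (if (empty? remaining)
      order
      (let [done  (set order)
            ready (for [[t ds] remaining
                        :when (every? done ds)] t)]
        (if (empty? ready)
          (throw (ex-info "Dependency cycle" {:remaining (keys remaining)}))
          (recur (into order ready) (apply dissoc remaining ready)))))))

;; (topo-sort {:deploy #{:build} :build #{:code} :code #{}})
;; ;=> [:code :build :deploy]
</code></pre>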
<br />
As you can see there are still many questions to be answered. I think this is an idea worth exploring. In my experience, the usual tools fall flat at resolving the tension between the desire to know and the reality of uncertainty.<br />
<br />
To effectively attack this tension requires abstracting away much of the minutiae of detailed planning by embracing the variability of the process. The plan is useless. Planning is indispensable. Therefore, plan continuously.<br />
<br />
<div style="left: -99999px; position: absolute;">
In preparing for battle I have always found that plans are useless, but planning is indispensable.</div>
<div style="left: -99999px; position: absolute;">
</div>
<div style="left: -99999px; position: absolute;">
</div>
<div style="left: -99999px; position: absolute;">
Read more at: https://www.brainyquote.com/quotes/quotes/d/dwightdei164720.html</div>
<div style="left: -99999px; position: absolute;">
In preparing for battle I have always found that plans are useless, but planning is indispensable.<br />
Read more at: https://www.brainyquote.com/quotes/quotes/d/dwightdei164720.html</div>
<div style="left: -99999px; position: absolute;">
In preparing for battle I have always found that plans are useless, but planning is indispensable.<br />
Read more at: https://www.brainyquote.com/quotes/quotes/d/dwightdei164720.html</div>
<div style="left: -99999px; position: absolute;">
In preparing for battle I have always found that plans are useless, but planning is indispensable.<br />
Read more at: https://www.brainyquote.com/quotes/quotes/d/dwightdei164720.html</div>
<div style="left: -99999px; position: absolute;">
In preparing for battle I have always found that plans are useless, but planning is indispensable.<br />
Read more at: https://www.brainyquote.com/quotes/quotes/d/dwightdei164720.html</div>
<div style="left: -99999px; position: absolute;">
n preparing for battle I have always found that plans are useless, but planning is indispensable.<br />
Read more at: https://www.brainyquote.com/quotes/quotes/d/dwightdei164720.html</div>
<div style="left: -99999px; position: absolute;">
n preparing for battle I have always found that plans are useless, but planning is indispensable.<br />
Read more at: https://www.brainyquote.com/quotes/quotes/d/dwightdei164720.html</div>
<div style="left: -99999px; position: absolute;">
n preparing for battle I have always found that plans are useless, but planning is indispensable.<br />
Read more at: https://www.brainyquote.com/quotes/quotes/d/dwightdei164720.html</div>
Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com2tag:blogger.com,1999:blog-8086052983184217440.post-7962078854140040722016-08-30T14:01:00.000-04:002016-09-03T06:20:13.894-04:00"Clojure Polymorphism" Released!From my new blog <a href="http://realworldclojure.wordpress.com/">Real World Clojure</a>. What am I doing with this new blog? I have no idea, but you can follow along.<br />
<br />
~ ~ ~ ~<br />
<br />
I have released a short e-book (30 pages) titled "Clojure Polymorphism." You can get 50% off by using this coupon link <a href="http://www.leanpub.com/clojurepolymorphism/c/ONeJZ629Isy7">http://www.leanpub.com/clojurepolymorphism/c/ONeJZ629Isy7</a>.<br />
What is this book about?<br />
<blockquote>
When it comes to Clojure there are many tutorials, websites, and books about how to get started (language syntax, set up a project, configure your IDE, etc.). There are also many tutorials, websites, and books about how language features work (protocols, transducers, core.async). There are precious few tutorials, websites, and books about when and how to use Clojure's features.<br />
<br />
<br />
<section class="about-book-copy"><div class="trimmed expanded">
<div class="cms-content">
This is a comparative architecture class. I assume you are familiar with Clojure and even a bit proficient at it. I will pick a theme and talk about the tools Clojure provides in that theme. I will use some example problems, solve them with different tools, and then pick them apart for what is good and what is bad. There will not be one right answer. There will be principles that apply in certain contexts.<br />
I this installment, I will pick up the theme of "Polymorphism" looking at the tools of polymorphism that Clojure provides. Then I take a couple of problems and solve them several ways. At the end of it all, we look back at the implementations and extract principles. The end goal is for you to develop an understanding of tradeoffs and a taste for good Clojure design.</div>
</div>
</section></blockquote>
<br />
<br />
<section class="about-book-copy">I have some ideas for other e-books. Perhaps a concurrency tour of Clojure taking a look at futures, STM, reducers, core.async, etc. Or maybe talk about identity by looking at <code>atom</code>, <code>agent</code>, <code>ref</code>, <code>volatile!</code>, etc. Or maybe look at code quality tools. Or how to organize namespaces. Or adding a new data structure with <code>deftype</code>?</section><section class="about-book-copy"><br data-mce-bogus="1" /></section>What would you like to see? <a data-mce-href="https://realworldclojure.wordpress.com/contact/" href="https://realworldclojure.wordpress.com/contact/">Contact</a> me. :)Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-31971786195951437972016-08-19T13:15:00.001-04:002016-09-03T06:20:26.785-04:00Reducible StreamsLaziness is a great tool, but there are some gotchas. The classic:<br />
<code></code><br />
<pre><code>(with-open [f (io/reader (io/file some-file))]
  (line-seq f))
</code></pre>
<br />
<code>line-seq</code> will return a lazy seq of lines read from <code>some-file</code>, but if the lazy seq escapes the dynamic extent of <code>with-open</code>, then you will get an exception:<br />
<code></code><br />
<pre><code>IOException Stream closed java.io.BufferedReader.ensureOpen (BufferedReader.java:115)
</code></pre>
<br />
With laziness, the callee produces data, but the caller controls when data is produced. However, sometimes the data that is produced has associated resources that must be managed. Leaving the caller in control of when data is produced means the caller must know about and manage the related resources. Using a lazy sequence is like co-routines passing control back and forth between the caller and callee, but control transfers only once per item; there is no way to run a cleanup routine after the caller has decided to stop consuming the sequence.<br />
<br />
<h3>
A Tempting Solution</h3>
One might immediately think about putting the resource control into the lazy seq:<br />
<code></code><br />
<pre><code>(defn my-line-seq* [rdr [line & lines]]
  (if line
    (cons line (lazy-seq (my-line-seq* rdr lines)))
    (do (.close rdr)
        nil)))

(defn my-line-seq [some-file]
  (let [rdr (io/reader (io/file some-file))
        lines (line-seq rdr)]
    (my-line-seq* rdr lines)))
</code></pre>
<br />
This way the caller can consume the sequence how it wants, but the callee remains in control of the resources. The problem with this approach is the caller is not guaranteed to fully consume the sequence, and unless the caller fully consumes the sequence the file reader will never get closed.<br />
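<br />
To make the leak concrete, here is a quick sketch (assuming the <code>my-line-seq</code> defined above): partial consumption never reaches the branch that calls <code>.close</code>.<br />
<code></code><br />
<pre><code>;; Sketch: partial consumption leaks the reader. `first` realizes only
;; the head of the lazy seq, so the `.close` branch never runs.
(first (my-line-seq "/etc/hosts"))
;; returns the first line, but the underlying reader stays open
</code></pre>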
<br />
<h3>
An Actual Solution</h3>
There is a way to fix this. You can require the caller to pass in a function to consume the generated data, then the callee can manage the resource and execute the function. It might look something like:<br />
<code></code><br />
<pre><code>(defn process-the-file [some-file some-fn]
  (with-open [f (io/reader (io/file some-file))]
    (doall (some-fn (line-seq f)))))

(process-the-file my-file-name do-the-things)
</code></pre>
<br />
Once upon a time clojure.java.jdbc had a <code>with-query-results</code> macro that exposed a lazy seq of query results, and it had these same resource management issues. Then it was changed to use this second approach, where you pass in functions.<br />
<br />
There is a hitch to this approach. Now the callee has to know more about how the caller's logic works. For instance, in the above code you are assuming that <code>some-fn</code> returns a sequence that you can pass to <code>doall</code>, but what if <code>some-fn</code> reduces the sequence of lines down to a scalar value? Perhaps <code>process-the-file</code> could take two functions <code>seq-fn</code> and <code>item-fn</code>:<br />
<code></code><br />
<pre><code>(defn process-the-file [some-file item-fn seq-fn]
  (with-open [f (io/reader (io/file some-file))]
    (seq-fn (map item-fn (line-seq f)))))

(process-the-file my-file-name do-a-thing identity)
</code></pre>
<br />
That's better? I still see two problems:<br />
<ol>
<li>The caller is back to having to know/worry about resource management, because it could pass a <code>seq-fn</code> that does not fully realize the lazy seq before it escapes the <code>with-open</code>.</li>
<li>The logic hooks that <code>process-the-file</code> provides may never be quite right. What about a hook for when the file is open? How about when it is closed?</li>
</ol>
I could argue that this whole situation is worse, since the caller still has to worry about resource management, and now the callee has this additional burden of trying to predict all of the logic hooks the caller might want.<br />
<br />
An additional design consequence is that you are inverting control from what it was in the lazy seq case. Whereas before the caller had control over when the data is consumed, now the callee does. You have to break your logic up into small chunks that can be passed into <code>process-the-file</code>, which can make the code a bit harder to follow, and you must put your sharded logic close to the call site for <code>process-the-file</code> (i.e., you cannot take a lazy sequence from <code>process-the-file</code> and pass it to another part of your code for processing). There are advantages and disadvantages to this consequence, so it is not necessarily bad; it is just something you have to consider.<br />
<br />
<h3>
Another Solution</h3>
We can also solve this by using a different mechanism in Clojure: reduction. Normally you would think of the reduction process as taking a collection and producing a scalar value:<br />
<code></code><br />
<pre><code>(defn process-the-file [some-file some-fn]
  (with-open [f (io/reader (io/file some-file))]
    (reduce (fn [a v] (conj a (some-fn v))) [] (line-seq f))))

(process-the-file my-file-name do-a-thing)
</code></pre>
<br />
While this may look very similar to our first attempt, we have some options for improving it. Ideally we'd like to push the resource management into the reduction process and pull the logic out. We can do this by reifying a couple of Clojure interfaces, and by taking advantage of transducers.<br />
<br />
If we can wrap a stream in an object that is reducible, then it can manage its own resources. The reduction process puts the collection in control of how it is reduced, so it can clean up resources even in the case of early termination. When we also make use of transducers, we can keep our logic together as a single transformation pipeline, but pass the logic into the reduction process.<br />
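<br />
As a rough sketch of the idea (simplified, and not the library's actual implementation), you can reify <code>clojure.lang.IReduceInit</code> so that <code>with-open</code> scopes the entire reduction, including early termination via <code>reduced</code>:<br />
<code></code><br />
<pre><code>(require '[clojure.java.io :as io])

;; Simplified sketch: a reducible collection of lines that owns its
;; reader. `with-open` wraps the whole reduction, so the reader is
;; closed whether the reduction runs to completion or a transducer
;; like `take` terminates it early with `reduced`.
(defn reducible-lines [file]
  (reify clojure.lang.IReduceInit
    (reduce [_ f init]
      (with-open [rdr (io/reader file)]
        (reduce f init (line-seq rdr))))))

;; Take only the first two lines; the reader is still closed afterward.
(into [] (take 2) (reducible-lines "/etc/hosts"))
</code></pre>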
<br />
I have created a library called <a href="https://github.com/pjstadig/reducible-stream/">pjstadig/reducible-stream</a>, which will create this wrapper object around a stream. There are several functions that will fuse an input stream, a decoding process, and resource management into a reducible object. Let's take a look at them:<br />
<ul>
<li><code>decode-lines!</code> will take an input stream and produce a reducible collection of the lines from that stream.</li>
<li><code>decode-edn!</code> will take an input stream and produce a reducible collection of the objects read from that stream (using clojure.edn/read).</li>
<li><code>decode-clojure!</code> will take an input stream and produce a reducible collection of the objects read from that stream (using clojure.core/read).</li>
<li><code>decode-transit!</code> will take an input stream and produce a reducible collection of the objects read from that stream.</li>
</ul>
Finally, there is a <code>decode!</code> function that encapsulates the general abstraction, and can be used for some other kind of decoding process. Here is an example of the use of <code>decode-lines!</code>:<br />
<code></code><br />
<pre><code>(into []
      (comp (filter (comp odd? count))
            (take-while (complement #(string/starts-with? % "1"))))
      (decode-lines! (io/input-stream (io/file "/etc/hosts"))))
</code></pre>
<br />
This code will parse <code>/etc/hosts</code> into lines, keeping only lines with an odd number of characters, until it finds a line that starts with the character '1'. Whether or not the process consumes the entire file, the input stream will be closed.<br />
<br />
Advantages:<br />
<ul>
<li>This reducible object can be created and passed around to other bits of code until it is ready to be consumed.</li>
<li>When the object is consumed either partially or fully the related resources will be cleaned up.</li>
<li>Logic can be defined separately and in total (as a transducer), and can be applied to other sources like channels, collections, etc.</li>
</ul>
Disadvantages:<br />
<ul>
<li>This object can only be consumed once. If you try to consume it again, you will get an exception because the stream is already closed.</li>
<li>If you treat this object like a sequence, it will fully consume the input stream and fully realize the decoded data in memory. In certain use cases this may be an acceptable tradeoff for having the resources automatically managed for you.</li>
</ul>
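The first disadvantage can be demonstrated with a short, hypothetical example (the namespace and the exact exception are assumptions on my part):<br />
<code></code><br />
<pre><code>;; Hypothetical sketch, assuming pjstadig/reducible-stream is on the
;; classpath and exposes `decode-lines!` from this namespace.
(require '[pjstadig.reducible-stream :refer [decode-lines!]]
         '[clojure.java.io :as io])

(def lines (decode-lines! (io/input-stream (io/file "/etc/hosts"))))

(reduce (fn [n _] (inc n)) 0 lines) ;; counts the lines, then closes the stream
(reduce (fn [n _] (inc n)) 0 lines) ;; throws: the stream is already closed
</code></pre>
<br />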
<h3>
Summary</h3>
Clojure affords you several different tools for deciding how to construct your logic and manage resources when you are processing collections. Laziness is one such tool, with its own advantages and disadvantages; its main disadvantage is resource management.<br />
<br />
By making use of transducers and the reduction process in a smart way, we can produce an object that can manage its own resources while also allowing collection processing logic to be defined externally. The library <a href="https://github.com/pjstadig/reducible-stream">pjstadig/reducible-stream</a> provides a way to construct these reducible wrappers with decoding and resource management fused to a stream.<br />
<br />
<h3>
Acknowledgments</h3>
<br />
Special hat tip to <a href="https://twitter.com/hiredman_">hiredman</a>. His <a href="https://ce2144dc-f7c9-4f54-8fb6-7321a4c318db.s3.amazonaws.com/reducers.html">treatise</a> on reducers is well worth the read. Many moons ago it got me started thinking about these things, and I think with transducers on the scene, the idea of a collection managing its own resources during reduction is even more interesting.Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-91830058190713473742016-05-09T09:07:00.000-04:002016-05-09T09:07:51.723-04:00The March of Technology<blockquote>"Our inventions are wont to be pretty toys, which distract our attention from serious things. They are but improved means to an unimproved end, an end which it was already but too easy to arrive at; as railroads lead to Boston or New York. We are in great haste to construct a magnetic telegraph from Maine to Texas; but Maine and Texas, it may be, have nothing important to communicate. Either is in such a predicament as the man who was earnest to be introduced to a distinguished deaf woman, but when he was presented, and one end of her ear trumpet was put into his hand, had nothing to say. As if the main object were to talk fast and not to talk sensibly. We are eager to tunnel under the Atlantic and bring the Old World some weeks nearer to the New; but perchance the first news that will leak through into the broad, flapping American ear will be that the Princess Adelaide has the whooping cough. After all, the man whose horse trots a mile in a minute does not carry the most important messages; he is not an evangelist, nor does he come round eating locusts and wild honey. I doubt if Flying Childers ever carried a peck of corn to mill."</blockquote>Thoreau, Henry David. <i>Walden, and on the Duty of Civil Disobedience.</i> Project Gutenberg. Web. 09 May 2016. https://www.gutenberg.org/<br />
<br />
Or in the words of a more modern philosopher and poet:<br />
<br />
<img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAD27ShgkTJjFCiwxyDkdiRv_zoTjdygdTa8MaZGHhSdEdPSXu1A4mtjPq0PuM7x8_rA_cjEVFrjiklE8H6LrNGxWTR-3td1KePNmSLSW4DU8GM8GshnI5AUXMj9Fg6FfAJUQrdmUrpAw/s1600/anigif_optimized-26314-1433990758-8.gif"/>Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-21594842581075065942016-03-05T15:31:00.001-05:002016-03-07T14:21:06.263-05:00Making Fake ThingsSoftware is fake. There are bits inside a computer represented by a magnetic or electrical charge or mechanical potential or some such thing. But software is not an electrical charge. Electrical charges can represent ones and zeroes and a series of ones and zeroes like "10111101" can represent the JVM opcode "anewarray" or the fraction one-half ("½") in the ISO-8859-1 character encoding or the number -67 in twos-complement. Software is not electrical charges, it is a particular interpretation imposed on electrical charges. An interpretation does not weigh anything. It has no color, taste, temperature, volume, mass, or any physical features. It is fake, but fake things can be useful.<br />
<br />
Fake things can represent real things (or other fake things). For example, you can represent a couch with a 3D model in a computer. You can represent cities and towns and roads with fake things. You can also represent fake things with other fake things. JVM opcodes, characters, and numbers are all fake things represented by "10111101", which is fake. Fake things are useful because they can represent real and fake things in a way that can be cheaply manipulated and transported instantly across the world. Fake things also have challenges.<br />
<br />
Software is a little unique even among fake things because in making software we are often making something that has never existed before. When someone creates a stove there are hundreds of thousands of other stoves in existence to draw upon. There are wood stoves, electric stoves, and gas stoves. But when someone created a text editor, they created something that had never existed before. Here is how Richard Gabriel describes it:<br />
<br />
<blockquote>"But, consider the first people to design and build a text editor. Before that, there was never a text editor. Changes to a manuscript were always made by retyping or retypesetting. How would people want to make textual changes? How would people want to navigate? Searching? - no one ever heard of that before. Systematic changes? Huh? By the way, there were no display terminals, so how do you even look at the manuscript?" -- <a href="http://dreamsongs.com/LessonsFromNothing.html">http://dreamsongs.com/LessonsFromNothing.html</a></blockquote><br />
Web applications, virtual currencies, automated theorem provers, and many other software applications had never existed before or were so different in nature from their physical counterparts that they were a unique thing. Making fake things is hard enough, but making things that have never existed before is that much harder. That's not the end of it, though.<br />
<br />
Fake things have no real world to help co-design them. Stoves have a real world to help co-design them. There are accessories that are used with stoves that help co-design them. Real things like pots and pans. Stoves have to fit through doorways, nestle between kitchen cabinets, and match the colors on the walls. Text editors have accessories like keyboards and mice that were invented to give real people made of meat a way of manipulating a conceptual world by proxy. Perhaps a mouse has to be compatible with a human hand, but a text editor has to be compatible with the mental model of a text editor that exists in a human mind, a model which no one had ever thought of before. Ultimately making software is a process of collaborating with other humans to dream up some mental model, and then making a fake thing out of software that other humans can use to manipulate that model (assuming they properly understand the mental model).<br />
<br />
Which reminds me, collaboration is also a fake thing. Collaboration is about using real things, like vibrating air, to push around fake things, like words. It is about using real things, like markers and whiteboards, to manipulate fake things, like ideas. All of these real things can be replaced by fake things, like video conferencing software and text editors. And fake things like words and ideas can be replaced by other fake things, and all of these fake things can be instantly transported, copied, and manipulated by real people in real (and very distant) places. Collaboration is not a real thing, it is a fake thing produced through the interaction of real people thinking creatively.<br />
<br />
And making software is a creative act. Writing software is writing instructions to make a computer do something. You must choose the instructions, determine their order, name things. You develop your own style. Writing software is writing words that have effect. Writing software is as close as you can get to God with words speaking reality into existence, the ultimate creative act. But writing software is not just for telling computers what to do. It is also collaboration with other humans. They must read, understand, modify, and extend what you write. They must understand your vision. You must collaborate with them through your source code.<br />
<br />
So, here we are. We have discovered that software is a fake thing, that it is often an entirely new thing, that it is a pure product of the mind, that it is born of collaboration, and it is creative expression. Now what? We must systematically question the constraints we place on ourselves, because those constraints are often meant for real things and our things are fake. Here are a few examples:<br />
<br />
A top-down management hierarchy is for making real things, not fake things. Top-down, command-and-control hierarchies are about control and efficiency. Control and efficiency are important for real things, because real things have locality, cost, and scarcity. Software has none of these things. Control and efficiency are important when you are manufacturing the same thing over and over. Software is often exploratory. Software is valuable not because we repetitively make lots of little copies of the same thing, but because we dream up some new way of doing things that has never been done before. Control and efficiency are important when you have a predictable process. A creative process is not predictable. You may think for hours about a problem, sleep on it, and then have the answer pop into your head the instant you wake up. We need to think differently, not just about what we make, but how we make it.<br />
<br />
Offices are about locality. An office puts materials, means of production, and managers in the same physical location. Yet with software there is no material and the means of production are mental. There is no reason to be concerned about locality. Ostensibly having a bunch of people in the same office enables them to collaborate, but collaboration is a fake thing. Collaboration does not exist in San Francisco or Saint Louis. It does not weigh 1kg. It is not blue. Having an office for collaboration is a rationalization that projects the past onto the future. Is collaboration different using video conferencing and Google Docs than it is using tables and chairs in an office? Yes, because fake things are different than real things. I do not recommend mixing fake things like video conferencing with real things like offices. It may take getting used to, but embracing the fakeness of collaboration has advantages like hiring people where they want to live instead of trying to convince them to live where you live. It also means having permanent, searchable, modifiable artifacts that can be shared instantly across the world, instead of a whiteboard in a room.<br />
<br />
Software can process data, but software is also data. This creates leverage. You can flip a bit, and that bit can flip ten others, and those ten another one hundred, etc. Compilers, build tools, continuous integration, and automated tests are all software doing things to software. "The cloud" has created a lot of leverage because it took something that was real (a machine) and made it fake (a "cloud instance"), and once it is fake it can be manipulated by software. The higher you can climb the mountain of abstraction the more powerful you will become. Before selling to Facebook, WhatsApp had ~450 million active users and ~55 employees. Yahoo has ~12,500 employees. I don't know how many active users they have, but let's just pretend it is ~450 million. Don't be Yahoo.<br />
<br />
These are just examples, and you can agree or disagree. My point is, we as an industry can achieve market success and realize our visions much more powerfully, but we must understand the nature of the software we are creating (it is fake), and the newness of what we are doing every day, and its collaborative nature, and the tools that we can take advantage of, and we must have the courage to give up on arbitrary constraints that are optimized for making real things. We must pursue leverage, because leverage will enable us to do amazing things.Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com1tag:blogger.com,1999:blog-8086052983184217440.post-17023392696157139622014-05-30T14:20:00.000-04:002014-05-30T14:20:00.566-04:00The following is a quote from an astronaut speaking about a spacecraft<blockquote>"There's so much more elbow room in there compared to the Soyuz," he said. "Instead of just bringing two of your buddies, you can bring six ... It's got modern electronics, it's got modern materials in the heat shield. So technologically, it's a giant leap beyond the Soyuz."<br/><br/><a href="http://spaceflightnow.com/news/n1405/29dragonv2/">SpaceX reveals new-look passenger spacecraft</a></blockquote><br />
One could think of a number of snarky comments that could be made (especially given that Elon Musk is also the CEO of Tesla), but I'm kind of amazed that this is how human beings are speaking about spacecraft these days. <br />
<br />
Perhaps it should all be taken with a grain of salt since this quote was spoken at a publicity event by someone who works for SpaceX.<br />
<br />
Still...how long until I can take a vacation to Mars?Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-50115905793060602662014-05-10T09:20:00.001-04:002014-05-10T09:45:32.797-04:00This, But For Computer Programming<blockquote>If you meet a philosopher on a train and ask him his profession, he is likely to lie. It is not that philosophers are especially prone to lying, but rather that philosophy is a peculiar profession. To tell your fellow passenger that you are a philosopher opens up an awkward line of questioning. ... If you take the plunge, however, and accept the label of philosopher, you must be prepared for the disappointment when your listener hears that you don't live in a hut on a mountaintop, haven't uncovered the secret of life, and cannot explain why the world exists. If you are foolish enough to go further and attempt to describe your lifelong attempt to reconcile the epistemology of mathematics with its ontology, be prepared to encounter a look in which boredom and horror are blended equally. Best, therefore, to say simply that you are an architect, and leave it at that.<br />
<br />
— <em>A World Without Time</em>, Palle Yourgrau, page 164.</blockquote><br />
Some possible substitutions:<br />
<table style="border: 1px solid black"><tr><td>Philosopher</td><td>Computer Programmer</td></tr>
<tr><td>he/him</td><td>she/her</td></tr>
<tr><td>live in a hut on a mountaintop</td><td>live in your parents' basement</td></tr>
<tr><td>uncovered the secret of life</td><td>know why my computer is running so slowly</td></tr>
<tr><td>explain why the world exists</td><td>install the driver for my printer</td></tr>
<tr><td>reconcile the epistemology of mathematics with its ontology</td><td>figure out the best cache invalidation strategy to provide a balance of performance and freshness</td></tr>
</table><br />
As a corollary, I wonder if there is a t-shirt for philosophers that is congruent with the "No, I will not fix your computer" t-shirt for computer professionals, something like "No, I will not fix your worldview."<br />
Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-10783019716942943732014-01-14T06:49:00.000-05:002014-01-14T06:49:30.291-05:00The Solution for Silicon Valley?<blockquote>"People—especially in the financial community—seem to assume that every industry works kind of like a web startup, and that all you need is two hot guys and $25,000 and you're a millionaire in six months. Heavy semi doesn't work like that. Heavy semi is like steel mills and railroads. By the time you can get a serious semi company self sustaining you're looking at a couple hundred million dollars of investment." — Ivan Godard of Out-of-the-Box Computing (<a href="https://www.youtube.com/watch?v=uotQn-jrAZU#t=599">https://www.youtube.com/watch?v=uotQn-jrAZU#t=599</a>)</blockquote><br />
It's a longer conversation to have, but I have opinions about Silicon Valley (<a href="http://thenextweb.com/entrepreneur/2011/07/13/the-problem-with-silicon-valley-is-itself/#!r9ZqL">that</a> I <a href="http://www.jwz.org/blog/2011/11/watch-a-vc-use-my-name-to-sell-a-con/">share</a> <a href="http://www.ritholtz.com/blog/2013/05/startups-are-not-disruptive-they-the-global-rich-get-richer/">with</a> <a href="http://allthingsd.com/20130521/bebo-founders-go-analog-with-exclusive-battery-club-in-san-francisco/">others</a>). However, it is nice to see people trying to create a real business and solve real problems instead of creating the next <a href="http://nonstartr.com/">Instagram clone</a>. The thing is that it takes hard work, many years, and the perspective to step back and work towards fundamental—not incremental—change.<br />
<br />
Best of luck to the OOTBC guys! True, a new CPU architecture isn't exactly solving problems of justice and social inequality, but at least it isn't a get rich quick scheme. Finding ways to fill in computation at neglected price points in the market is a stepping stone to solving other problems. Hopefully those problems aren't how to gamify haircuts or something. *sigh*<br />
<br />
Let's have some get-rich-slow startups that are truly innovating to solve difficult, fundamental problems.Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-34457286570796427122014-01-08T06:19:00.001-05:002014-01-08T06:19:46.755-05:00The Buck Stops Here"Rufer explains the thinking behind the process: 'I was signing checks one day and I recalled the saying, "The buck stops here." I thought to myself, that isn't true. In front of me was a purchase order, a note that said the stuff had been shipped, we had received it, and that the price on the invoice matched the purchase order. A check had been prepared. Now, do I have the choice not to sign the check? Nope. So the question isn't where the buck stops, but where it starts—and it starts with the person who needs the equipment. I shouldn't have to review the purchase order, and the individual shouldn't have to get a manager's approval.'"<br />
<br />
"First Let's Fire All the Managers," Gary Hamel (<a href="https://archive.harvardbusiness.org/cla/web/pl/product.seam?c=573&i=15715&cs=7c855bfce2fd1c3860846954978b1181">https://archive.harvardbusiness.org/cla/web/pl/product.seam?c=573&i=15715&cs=7c855bfce2fd1c3860846954978b1181</a>)Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-57358715638589667072013-12-30T11:03:00.000-05:002018-07-08T16:54:25.787-04:00NixOS: A Field Report<h3>Why?</h3>NixOS is a relatively new Linux distribution based on the Nix package manager. A Nix package is hashed based on its content, dependencies, compiler, etc. and stored immutably in the Nix store. The Nix package manager uses this Nix store, a patched dynamic linker, and a buttload of symlinks to isolate installs. You can have multiple versions of applications and libraries living on the same machine without conflict.<br />
<br />
The Nix package manager also maintains a history of installations and—since the Nix store is full of immutable installs of applications and libraries—Nix provides a way to rollback installs to a previously working configuration. NixOS extends this concept all the way to the Linux kernel. When you change the system configuration for your machine you get a new GRUB entry for that entire profile of your computer: kernel, applications, libraries, and even your configuration files. If something goes badly you just reboot and select the GRUB entry for your previous configuration. Check out section 1.4 of the NixOS manual for some cool details <a href="http://nixos.org/nixos/manual/#sec-upgrading">http://nixos.org/nixos/manual/#sec-upgrading</a>.<br />
<br />
But wait there's more! These installations are atomic, so it either completes successfully or not at all. So basically what we get here is something like an ACID MVCC for the software on your computer.<br />
<br />
There are definitely practical benefits to all this. There have been times that I've done some riskier upgrades. Moving to a new Ubuntu release is one of those times. There's always a bit of a question as to what may or may not be broken. I usually back up my files and do a fresh install. I've also been bitten a couple of times by a bad video driver upgrade that was annoying to recover from. That said, I don't know that I've really needed an ACID MVCC for software all that often.<br />
<br />
Aside from riskier installs you can also setup a Nix configuration for each project you work on. You can chroot into this Nix environment. The Nix configuration can be version controlled, so that when you checkout an old branch you can use the environment that matches that version of your project. You can also take this Nix configuration and create a VM from it, or send it off to an EC2 node. Admittedly you can do all of this stuff just using the Nix package manager on Linux or OSX without needing NixOS. (Here is how <a href="http://zef.me/5966/setting-up-development-environments-with-nix">http://zef.me/5966/setting-up-development-environments-with-nix</a>.)<br />
<br />
<h3>So, really...Why?</h3>So all this stuff is great, but maybe not worth switching to NixOS for your day-to-day desktop machine. I have moved for a couple of reasons. Lately I've been getting disillusioned with the direction Ubuntu is headed. I barely made it through the last LTS upgrade. I prefer to use Gnome and XMonad, but Ubuntu comes with Unity these days and it's hard to get a plain Gnome install (though I made it work). The flip side is that Ubuntu is a really smooth install where most everything just works out of the box, but that's not really enough to keep me there. I'm basically forced to consider some kind of a major change whether it is a change in distribution or the tools I use or something.<br />
<br />
I also like to try to change things up every once in a while. I've been on Ubuntu for close to 10 years and Gnome+XMonad for maybe 3 years. I need to shake things up a bit, learn some new stuff, and feel like a hopeless newb. Boy did I ever accomplish that!<br />
<br />
<h3>Attempt 1: No Gnome & Miserable Failure with Wireless</h3>I downloaded and burned the NixOS 13.10 minimal CD. It booted up fine and gives you hints to walk you through the process. The CD boots with a copy of the manual on one of the Linux consoles so you can jump back and forth to reference the manual as you're installing. The process is pretty simple, too. You format a partition, mount it, run a config generation program, and customize the generated config. Once the config is set you run an install command and boot into your new NixOS installation.<br />
<br />
However, Gnome doesn't install. Based on a reading of the docs you might be led to think that it does, but after 13.10 was released this commit was made to the Nix packages: <a href="https://github.com/NixOS/nixpkgs/commit/a734f32fa1efce21a008e391cfb10695c4d738cd">https://github.com/NixOS/nixpkgs/commit/a734f32fa1efce21a008e391cfb10695c4d738cd</a>, so it's not even an option in more recent versions of NixOS.<br />
<br />
Coincidentally, XMonad doesn't install either. Well, it seems like it does, and if you use it without any custom config then it's probably fine, but to use a custom config you need a Haskell compiler and the XMonad packages. For some reason when the XMonad packages install, they don't install the dependencies that they need (which seems broken given how Nix is supposed to work). I had to iteratively run `ghc-pkg check` and install a bunch of packages manually. Perhaps I was missing something. Also, not to give away the ending, but this was with 13.10, so it may be a different story with the latest unstable NixOS.<br />
<br />
Finally, I had wireless issues. The installation CD booted and I was able to get an install, but once I tried to find some combination of CLI and/or GUI software to configure my wireless adapter I wedged something and decided to just reboot and reinstall. This time, though, the installation CD hung when starting the WPA supplicant. It seemed like there was some state stored in the wireless card that was causing the WPA supplicant to hang, but I couldn't boot NixOS in order to configure the card. After many frustrating hours of fiddling to get the wireless working, I very nearly gave up on NixOS. As a last ditch effort I decided to try the unstable version of NixOS.<br />
<br />
<h3>Attempt 2: Using Unstable NixOS</h3>Out of the gate Unstable NixOS was more stable than the 13.10 release. I was able to actually boot the thing! I made it through the installation process. I had decided that I was going to use Xfce+i3 instead of Gnome+XMonad. Changing my desktop and window manager at the same time as changing my distribution was going to make this whole process more "interesting."<br />
<br />
Configuring NixOS is relatively simple once you know what the configuration options are, but finding them isn't as easy as it probably could be. The NixOS manual has a section of configuration options at the end, and that was an important reference. I don't know if it documents every possible option, so sometimes I would take a look at the Nix package source. There are also configuration options affecting some packages that aren't obvious at all unless you read the package source. In the future it would be nice (in addition to the way the documentation is currently organized) to have documentation that can be browsed per package, so you could see at a glance which options affect a particular package.<br />
<br />
Having a working distribution is nice, but there were still some annoyances. Searching for packages is a bit unsatisfying. The recommended way to search seems to be `nix-env -qaP '*' | grep NEEDLE`, which works, but it's not fast, and you can only search by package name; you can't search by description or configuration options or anything else. Even when you've found the package you want more detail on, it's not obvious how to locate its definition. You can browse the nixpkgs source, but it is organized by category, so the best approach seems to be a `find / -name NEEDLE` or something similar. None of this is really a deal breaker (for me), just lessons learned on how to find information.<br />
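For what it's worth, that search pipeline is easy to wrap in a small script. This is a sketch of my own, not an official tool; it assumes `nix-env` is on the PATH, and the helper that does the filtering is split out so it can be used on any list of lines:

```python
import re
import subprocess


def match_lines(lines, needle):
    """Case-insensitive regex filter over nix-env's output lines."""
    pattern = re.compile(needle, re.IGNORECASE)
    return [line for line in lines if pattern.search(line)]


def search_packages(needle):
    # Equivalent of `nix-env -qaP '*' | grep NEEDLE`; shells out to
    # nix-env, so it assumes a working Nix install on the PATH.
    output = subprocess.run(
        ["nix-env", "-qaP", "*"], capture_output=True, text=True,
    ).stdout
    return match_lines(output.splitlines(), needle)
```

It still only searches whatever `nix-env` prints (attribute paths and names), so it shares the same limitation as the grep pipeline: no searching by description or option.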
<br />
Finally, on the subject of finding information, the nix-dev mailing list is a good resource. I ended up searching it with Google using "site:http://lists.science.uu.nl/pipermail/nix-dev/ NEEDLE". It's also possible to search the Gmane archive: <a href="http://news.gmane.org/gmane.linux.distributions.nixos">http://news.gmane.org/gmane.linux.distributions.nixos</a>.<br />
<br />
<h3>Adaptations</h3>There were definitely tools that I'm used to using that were available and worked great on NixOS, among them were Conkeror, Emacs, OpenJDK, Ruby, Tmux, and Skype.<br />
<br />
There were replacements I had to make. Obviously Gnome+XMonad became Xfce+i3, and this had a cascading effect.<br />
<ul><li>I had a raise-or-run script that used wmctrl to activate a program's window, but wmctrl doesn't work with i3, so I had to replace that script with one that uses i3msg.</li>
<li>I spent a bit of time tuning my i3 config to have bindings similar to the ones I was using with XMonad.</li>
<li>Previously I was using gnome-panel for workspaces, wireless control, sound volume control, date/time, etc.; that got replaced by i3bar and i3status.</li>
<li>I was using swarp to shuffle the mouse cursor out of the way in Gnome+XMonad, and that package wasn't available on NixOS, so I replaced that functionality with xdotool.</li>
<li>I tried to get NetworkManager up and running, and there appear to be configuration options to set it up on NixOS, but I ran into issues. I ended up just installing the wpa_gui tool to manage my wireless connections.</li>
</ul>I also had a cache of scripts that I used for various things. Some of them were for building programs from source, but I think I can drop those and just use Nix to accomplish the same thing. In others I had to replace '#!/bin/bash' with '#!/usr/bin/env bash', since NixOS doesn't put things at fixed, standard locations like other Linux distributions do.<br />
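The raise-or-run script from the first bullet is roughly this shape. This is a hypothetical sketch of my own, not the exact script: it assumes `i3-msg` is on the PATH, and it relies on i3-msg returning a JSON array of replies with a "success" field for the focus command:

```python
import json
import subprocess


def focus_succeeded(i3msg_reply):
    """Parse i3-msg's JSON reply; True if it focused a matching window."""
    try:
        replies = json.loads(i3msg_reply)
    except ValueError:
        return False
    if not isinstance(replies, list):
        return False
    return any(isinstance(r, dict) and r.get("success") for r in replies)


def raise_or_run(window_class, command):
    """Focus an existing window of `window_class`, or launch `command`."""
    reply = subprocess.run(
        ["i3-msg", '[class="%s"] focus' % window_class],
        capture_output=True, text=True,
    ).stdout
    if not focus_succeeded(reply):
        subprocess.Popen(command)


# e.g.: raise_or_run("Emacs", ["emacs"])
```

The same idea worked with wmctrl under Gnome+XMonad; only the "activate the window" half needed swapping out for i3.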
<br />
There are some other minor issues to work out. For one, my sound card seems to be detected just fine, and Skype will use it, but Xfce doesn't seem to play any sounds; I just get a system beep for "sound" events. I think this is an issue with the configuration of Xfce and its sound theme. Not a major issue for me, but I'd like to figure it out eventually.<br />
<br />
I would also like to get Dropbox installed. It actually worked fine on 13.10, but for some reason it isn't working so well on unstable. I haven't had a lot of time to look into it, and again not a high priority. I may take the opportunity to look into OwnCloud instead.<br />
<br />
<h3>Conclusion</h3>I went into this expecting it to be rough at times, and expecting to make some changes to the tools I was used to using. That was no surprise. I was a bit disappointed in how the latest official release of NixOS turned out. The flip side is that the Unstable release has worked great so far, and I'm a little more comfortable following the bleeding edge of NixOS given that (theoretically) I should be able to roll back any changes that bork my system. I'm hoping that turns out to be true.<br />
<br />
If you're willing to learn something new and possibly snag your sweater on some rough edges, then NixOS could be a fun experience for you. :) If you use some of the same tools as me, then hopefully what I've discovered can help you move to NixOS. If you want something a little more polished, then you may want to wait on NixOS.<br />
<br />
<h3>References</h3>My configuration.nix<br />
<a href="https://gist.github.com/pjstadig/8183688">https://gist.github.com/pjstadig/8183688</a><br />
Dotfile changes moving from Ubuntu+Gnome+XMonad to NixOS+Xfce+i3<br />
<a href="https://github.com/pjstadig/new.dotfiles/compare/455374...master">https://github.com/pjstadig/new.dotfiles/compare/455374...master</a>Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com3tag:blogger.com,1999:blog-8086052983184217440.post-73602590825397445932013-12-22T09:10:00.001-05:002013-12-22T09:10:52.946-05:00A Proof of the Braininess of "The Simpsons" and "Futurama"<blockquote>Occasionally the mathematics does wind more deeply into the story, most notably in the 2010 “Futurama” episode “The Prisoner of Benda.” The plot turns on a device called the Mind-Switcher, which performs just the function its name suggests; over the course of the episode, as the apparatus is used with greater and greater abandon, the minds of the characters shuttle from body to body like singles switching bedrooms in a French farce. By the end, not a single consciousness remains in its proper skull. What’s worse, the characters can’t just retrace their steps to reunite each mind with its original body; the Mind-Switcher, having operated on a pair of minds, isn’t allowed to switch the same two minds again.<br />
<br />
Ken Keeler, who wrote the episode, realized that in order to get everything sorted out it might be necessary to introduce new characters, whose bodies could be used as waystations through which the minds could find their way home. An ordinary writer would have been content simply to find a way out of the episode. But Keeler became obsessed with the problem in its full generality, finally composing a proof that, no matter how wild the original fiesta of Mind-Switching, the damage can always be repaired once two new people are added to the system. This question may sound abstruse, but the part of math it belongs to — “combinatorial group theory” — is one of the hottest things going at the moment, with major advances popping up everywhere from Paris to Los Angeles. Keeler’s theorem isn’t one of those big advances, but it’s a real theorem, certainly the deepest piece of mathematics ever featured in a prime-time sitcom.</blockquote><br />
-- Jordan Ellenberg, "Mathematics and Homer Simpson" (<a href="http://www.washingtonpost.com/opinions/2013/12/20/ef1bfaa6-5b9a-11e3-bf7e-f567ee61ae21_print.html">http://www.washingtonpost.com/opinions/2013/12/20/ef1bfaa6-5b9a-11e3-bf7e-f567ee61ae21_print.html</a>)Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-30307943400832721152013-12-16T06:09:00.000-05:002013-12-16T06:09:40.243-05:00The Notion of "Real-Time""I think the current evolution of technical language around web developers has made real-time mean: 'consume information as soon as it is available' and not '_react to the information in a timely manner or this car will crash_'."<br />
<br />
-- Alvaro Videla, "Tell me more about your real-time systems" (<a href="http://videlalvaro.github.io/2013/12/tell-me-more-about-realtime.html">http://videlalvaro.github.io/2013/12/tell-me-more-about-realtime.html</a>)Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-54047531783801717662013-07-31T08:46:00.001-04:002013-08-13T07:46:33.388-04:00Remote != DistributedIn an age when companies seem to be shutting down their remote work programs, I'm here to tell you that a distributed team can work, and when it works well it is a great experience. I have worked on a distributed team for the past three years. Prior to that I worked for four years in an office that had one full-time remote employee and allowed occasional work from home for others.<br />
<br />
Working from home and working remotely are not the same thing as working on a distributed team. If there is a center of gravity at an office and a few remote employees orbit that office, then there is a different experience for some people than for others, and there is a different way of working for some people than for others. There is a natural imbalance that an inanimate system would want to equalize, but in a human organization people pull in opposite directions.<br />
<br />
A distributed team is a team where no one is physically co-located. There is no center of gravity at an office. There is no imbalance, because everyone has the same experience, and everyone has the same way of working.<br />
<br />
When there is an office/remote split, there is a different experience for some than for others. The office people have shared experiences like water cooler chat and birthday celebrations, and the remote people are left out. The office people gather in a conference room for a meeting and dial in the remote people over a crappy speakerphone. The experiences are different which affects morale and cohesion.<br />
<br />
On a distributed team everyone has the same experience. You find new ways to create shared experiences that are different from office experiences and unique to distributed teams. You use tools that allow meetings to be high quality experiences for everyone. These shared experiences support morale and cohesion.<br />
<br />
An office/remote split means different ways of working for different people. In-office employees may depend on ad hoc hallway conversations, but remote employees are not privy to those. In-office employees fall into an "out of sight, out of mind" mentality and assume remote employees aren't productive, simply because they aren't communicating with them. Remote employees feel disconnected and lack direction for the same reason.<br />
<br />
On a distributed team everyone communicates through the same channels, and everyone can see the activity about who is doing what. Since there aren't two different communication channels there isn't an "out of sight, out of mind" problem.<br />
<br />
When there are different experiences, different ways of working, different communication channels at the same company, then two different cultures develop, and that is detrimental. A permanent office/remote split is doomed to failure from the beginning, so it's no wonder that companies are killing their remote work programs.<br />
<br />
What does it look like to do a distributed team right? I can tell you what has worked for me and my company. I do not expect that our experiences are universal, but I will try to draw some general principles.<br />
<br />
Distributed teams that work well:<br />
<ul><li><strong>Use tools to fix work into a tangible medium of expression (to borrow from copyright law).</strong> Office collaboration via ad hoc hallway conversations and whiteboards doesn't work for distributed teams. You have to produce Google Docs, tickets in Jira, etc. These are things to which everyone can have access. Frankly, distributed or not, this is a better way of working than informal ideas banging around in people's heads.</li>
<li><strong>Communicate through many-to-many channels.</strong> Of course there is still a need for one-to-one communication, but using many-to-many communication channels means that everyone can feel a part of the company. Just like others can walk by and join in on hallway conversations, people can observe and contribute to Skype calls and chat rooms. It is also a good idea to have logs of these chats that people can catch up on (see above).</li>
<li><strong>Adopt processes that encourage collaboration and productivity.</strong> Working on a distributed team may not be for everyone; to a certain extent it does take a self-motivated individual. However, pair programming and daily standups create an environment that encourages collaboration and accountability. We also tend to hire people mostly in North and South America so that the timezones line up better for collaboration. We also meet face-to-face 3-4 times a year, which helps to develop the personal relationships that are necessary to working well together.</li>
</ul>Realize that the idea of "remote" working does not work when combined with an office culture and a center of gravity in one location. Working from home or remotely may work temporarily, but eventually gravity will pull you back towards the office. If you try to maintain this unnatural office/remote balance you will enforce different experiences and methods of work and create two different cultures in your company. This cultural divide will lead to jealousy, resentment and other morale and cohesion problems.<br />
<br />
This does not mean that it is impossible to function unless everyone reports to the same office. It just means that different tools and processes must be developed to facilitate a distributed team. You must use tools and processes that encourage tangible expression of ideas, many-to-many communication, and collaboration and productivity.<br />
<br />
Remote working is isolating and doomed to failure, but a distributed team done right is a joy.Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com1tag:blogger.com,1999:blog-8086052983184217440.post-53114228630363735062012-11-28T09:00:00.001-05:002012-11-28T09:01:15.664-05:00Evaluating CLEAR Wireless Internet<p>Here's my current situation: I have Cox High Speed Internet at home in Fairfax County, Virginia, and I have a Verizon Wireless smart phone with the HotSpot feature enabled. I've been intrigued by Clearwire for a while, and I'm always on the hunt for improving my Internet access at home and on the road. Though I've been fairly happy with HotSpot access through Verizon (though not so happy with my HTC Rezound, but that's a story for another time), Clear has a 15-day money back guarantee, and no contract, so it seemed like a no brainer to try them out. Also, I checked their coverage and I should get a good signal at my house. </p>
<p>
I think in an ideal world I'd like to decouple my smart phone and my mobile Internet. I'd like to have a smart phone with minutes and texting, and connect it to the Internet (as well as other devices) through a mobile HotSpot, like Clear. However, it doesn't look like that plan is going to come to fruition any time soon. For one thing, Verizon won't allow me to have a smart phone on their network without some kind of data plan. </p>
<h3>How does Clear stack up?</h3>
<p>The experience signing up for and using Clear has been really great. (I'll let you know how it goes returning the device and getting my money back...but I'm getting ahead of myself.)</p>
<p>
I signed up on their website, and received the device the next day. I plugged in the device and it connected to the 4G network. I was able to connect my computer to the HotSpot device without any problems at all.</p>
<p>
I tested the "4G Internet Basic" plan, which has an advertised speed of 1.5Mbps down and 500Kbps up for $34.99 per month. According to speedtest.net I was getting 1.92Mbps down and 460Kbps up with a ping of 72ms.</p>
<p>
It looks like you get what you pay for, and the experience is very smooth, but for me the price/speed isn't as good as I can get elsewhere. The HotSpot on my smart phone gives me 9.93Mbps down and 3.21Mbps up with a ping of 58ms for $30 on top of my unlimited data plan. (I'm grandfathered in on the Verizon plan I'm on. I'm not sure what the numbers would look like if I were to sign up today.) And my Cox High Speed Internet gives me 39.48Mbps down and 12.33Mbps up with a ping of 18ms for $40 per month.</p>
<p>
Clear has a "4G Internet Plan" which has an advertised speed of 6Mbps down and 1Mbps up for $49.99 per month, but that is getting on the steep side for me, and the analysis doesn't help Clear. </p>
<p>
Comparing my options, $1 per month buys about 0.99Mbps from Cox, 0.33Mbps from Verizon, 0.055Mbps from the 1.5Mbps Clear plan (using measured speeds), and 0.12Mbps from the 6Mbps Clear plan (using its advertised speed, which I didn't test).</p>
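For the record, here is the back-of-the-envelope arithmetic, recomputed from the speeds and prices quoted above (measured speeds where I had them, the advertised speed for the 6Mbps Clear plan):

```python
# Downstream Mbps per dollar per month, from the figures quoted above.
# Measured speeds for Cox, Verizon, and the 1.5Mbps Clear plan; the
# advertised speed for the 6Mbps Clear plan, which I didn't test.
plans = {
    "Cox": (39.48, 40.00),         # (Mbps down, dollars per month)
    "Verizon": (9.93, 30.00),
    "Clear 1.5Mbps": (1.92, 34.99),
    "Clear 6Mbps": (6.00, 49.99),  # advertised speed
}

for name, (mbps, dollars) in plans.items():
    print("%-15s %.3f Mbps per dollar" % (name, mbps / dollars))
```

The ordering is what matters here, and it's stark: Cox delivers roughly eight times the bandwidth per dollar of the better Clear plan.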
<h3>Conclusion</h3>
<p>I would really like to use a service like Clear, and I have no complaints about the Clear experience. However, it is not the best value for me. If Clear were charging $6 per month for their 6Mbps plan, I'd be all over it, maybe even if they charged $10 per month for their 6Mbps plan. However, as it is, I'm going to pass.</p>Paul Stadighttp://www.blogger.com/profile/04475151533455732056noreply@blogger.com0tag:blogger.com,1999:blog-8086052983184217440.post-76505561268055912492012-11-09T17:38:00.000-05:002012-11-09T17:38:12.993-05:00"Low Level" Programming<p>Given the type of work that many programmers do, it is hard to find value in
studying such "low level" concepts as operating systems, garbage collection,
digital signal processing, etc. These are solved problems. There are libraries
for them, and they most certainly are not relevant to designing web
applications.</p>
<p>This is like the age-old "why do I have to study calculus since I'll never use
it?" conundrum. I believe it just requires the right perspective to see the
value in studying low level programming concepts.</p>
<p>Consider two problems. First, imagine you are working at an e-mail archiving
company. Since many of your customers are businesses (not necessarily
individuals), you are storing some e-mails that are sent from someone inside the
business to others inside the business. In this case you decide to de-duplicate
the e-mails in your archive, and only store a single copy of such e-mails.</p>
<p>However, you want to be able to delete the e-mail when all of the associated
users have deleted it, so you need to keep track of which users are connected to
each e-mail, and when they have cut their association with it.</p>
<p>Does this problem sound familiar? It should. It is very similar to automatic
memory management. You have an object and many pointers to it, and you want to
know when you can delete the object. Maybe automatic memory management doesn't
match this problem at every point, but there is probably much to gain from
studying garbage collection algorithms.</p>
<p>Now consider a second problem (and a more tenuous connection :)). Imagine that at
this e-mail archiving company you want to synchronize a directory tree (like an
LDAP directory) of users and groups. You will take snapshots of a customer's
directory and store them, and then, for purposes of searching, use the
information to determine whether a user was part of a particular group at some
point in time.</p>
<p>You could certainly find the snapshot closest to the time in question, and see
whether the user was part of a particular group. This is one way of
interpolating between data points. Perhaps this problem is similar to
reconstructing an analog audio signal from digital samples, or regression
analysis. I admit this connection is more tenuous, but it may be that there are
techniques to be gleaned from work in these other areas.</p>
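The "closest snapshot" approach is the simplest interpolation: treat the directory's history as a step function, where each snapshot holds until the next one is taken. A sketch (the data layout here is an assumption of mine, not from any particular system):

```python
import bisect


def member_at(snapshots, user, group, when):
    """Was `user` in `group` at time `when`?

    `snapshots` is a list of (timestamp, {group: set_of_users}) pairs,
    sorted by timestamp. Interpolate with a step function: the snapshot
    in effect is the latest one taken at or before `when`.
    """
    times = [t for t, _ in snapshots]
    i = bisect.bisect_right(times, when) - 1
    if i < 0:
        return False  # no snapshot that early
    _, directory = snapshots[i]
    return user in directory.get(group, set())
```

A fancier scheme might interpolate differently (e.g. pick whichever snapshot is nearest in either direction), which is where the analogy to signal reconstruction starts to earn its keep.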
<p>The algorithms and data structures that we create in Computer Science are
abstract, and I think with the right perspective they can be applied in many
different situations. The next time you are sorting a deck of playing cards,
use quicksort!</p>paulhttp://www.blogger.com/profile/14647609048389725132noreply@blogger.com0