Wednesday, September 12, 2018

Engineering Synthesis

What is the nature of software engineering? How is it different from other kinds of engineering? Why is it so hard?

These are questions I have struggled with for many years. In my work, I have seen more than a few different takes on software engineering. Even when things start out right, they seem to end at a sad place, and this has bothered me. Is it really impossible to do software "right"? Or do we just have the wrong idea about how to do it? Software engineering is a relatively new discipline, so maybe we still have some things to learn.

I'm going to draw from several sources here, and try to synthesize some ideas about engineering, science, and art. I feel kind of silly writing all these words summarizing other sources when you could just go watch the videos and read the papers yourself. But for my purposes these sources are a framework for discussing and organizing my thoughts.

Real Software Engineering

"Real Software Engineering" by Glenn Vanderberg
http://www.infoq.com/presentations/Software-Engineering

Glenn Vanderburg is a software practitioner, and he is reacting to the claim that software engineering needs to grow up and become a "real" engineering discipline. But what is "real" engineering?

There are actually a couple of different versions of this talk available online, and in one Vanderburg takes some time to ask "how did we get here?" He digs up some history on the 1968 NATO conference whose goal was to define software engineering. He then covers some commonly believed myths about engineering and how different engineering disciplines use different methods, then brings it back around to software engineering and applies what we've learned.

There were three big ideas from Vanderburg's talk that stood out to me:

  1. The model of scientists discovering knowledge and engineers then applying that knowledge is wrong.
  2. Software engineering is unique because we spend a lot of time crafting design documents and models and a trivial amount of time actually producing the end product, which is the exact opposite of most other branches of engineering.
  3. Agile methods are the best methods we have and for all practical purposes they are software engineering.

When I first watched Vanderburg's talk years ago, the big idea was the second—about the uniqueness of software engineering—but coming back to it later I was surprised to find the first idea echoed in other sources. Vanderburg gives examples of advances in knowledge that came not from academics or scientists, but from practitioners and engineers. One example is Robert Maillart, an engineer who revolutionized the use of reinforced concrete in bridge building. He did this before there were mathematical models to explain the uses and limits of reinforced concrete. Scientific advances are just as likely to come from practitioners as from academics.

The second big idea from Vanderburg is that, among the kinds of engineering, software engineering has some unique characteristics. If one were to build a skyscraper, one would draw up designs, models, and blueprints, then hand those over to a construction team to construct the building. The blueprints are relatively cheap to produce. The actual construction is error prone and requires a lot of materials and labor. Looking at this process, it would seem very important to focus as much effort as possible on architecting the blueprints. Once you've laid the foundation, it is expensive to rethink the footprint of the building.

If I were to apply this process to software engineering I might do something like the following: hire a system architect to create a design document, and then get a bunch of code monkeys to actually construct the system by writing code. In my interpretation, the requirements and design document are the model and blueprints, the system architect is the architect, and the code monkeys are the construction crew. Vanderburg picked up an insight from Jack Reeves in the '90s: this interpretation is wrong.

Customers do not pay for code, they pay for an executable. They want a working system. That is the constructed product, and it is the compiler, not the code monkeys, that produces it. The code is the design document and mathematical model. The code monkeys are not the construction crew; they are the architects. Source code and its type systems are a mathematical model that can be formally verified. Using a compiler, I can produce a prototype from that model instantaneously and for free. The source code also contains documentation, and to the extent that it has automated tests (also written in the same language) it is self-verifying. Modern high-level languages and domain-specific languages can even be mostly understood by domain experts.

Software engineering is a unique engineering discipline, because source code is a unique artifact. We should be careful not to borrow engineering methods from a discipline where constructing a prototype is time consuming and expensive, and where one is necessarily forced to spend more time on up-front design to avoid that cost. This leads nicely into my third big idea: that agile methods are, for all practical purposes, the best kind of software engineering we know.

When I say agile methods, I mean agile with a little 'a'. I'm thinking (vaguely) of an incremental, tinkering approach versus a straight-line mechanical approach; of a technician approach versus a technique approach. Or as the original Agile Manifesto put it, "individuals and interactions over processes and tools." I think they got that right. What is interesting is that they were not the only ones to get it right. The original NATO conference on software engineering (1968!) had it right before they had it wrong.

There were two NATO conferences that were a year apart. At the first session Alan Perlis summarized the discussion on system design:

  1. A software system can best be designed if the testing is interlaced with the designing instead of being used after the design.
  2. A simulation which matches the requirements contains the control which organizes the design of the system.
  3. Through successive repetitions of this process of interlaced testing and design the model ultimately becomes the software system itself. I think that it is the key of the approach that has been suggested, that there is no such question as testing things after the fact with simulation models, but that in effect the testing and the replacement of simulations with modules that are deeper and more detailed goes on with the simulation model controlling, as it were, the place and order in which these things are done.

What he is saying is:

  1. Test early, test often.
  2. Take a breadth first approach mocking out what you need so you can get a sense for the overall system.
  3. Iteratively refine the system and replace the mocks.
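
To make this concrete, here is a minimal Clojure sketch of that interlaced style; the domain (a currency converter) and every name in it are hypothetical, invented only for illustration:

(ns currency.core-test
  (:require [clojure.test :refer [deftest is]]))

;; Breadth first: stub the rate source so the whole system can run end to end.
(defn fetch-rate
  "Mocked exchange rate; to be replaced later by a real lookup."
  [from to]
  1.25)

(defn convert [amount from to]
  (* amount (fetch-rate from to)))

;; Testing is interlaced with the design, not saved for the end.
(deftest convert-uses-rate
  (is (= 12.5 (convert 10.0 :usd :eur))))

As the real rate source gets written, the mock is replaced piece by piece, and the same tests keep steering the design, which is points 1 through 3 in miniature.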

That is suspiciously similar to an incremental development method. Between the 1968 NATO conference and the 1969 NATO conference things changed, and there was a clear tension between those who thought programming was best done by an expert technician, and those who thought programming was best done mechanistically by someone taught a body of scientific techniques. At the end of the 1969 conference, Tom Simpson gave a talk called "Masterpiece Engineering" which is oozing with conflicts of technician vs. technique.

There was definitely a lot of political maneuvering at the NATO conferences. There are some other resources you can investigate if you'd like. The point is the seeds of agile were there, but for some reason we ended up with 33 years of waterfall.

Engineering(,) A Path to Science

"Engineering(,) A Path to Science" by Richard P. Gabriel
http://www.infoq.com/presentations/Mixin-based-Inheritance

"Structure of a Programming Language Revolution" by Richard P. Gabriel
http://dreamsongs.com/Files/Incommensurability.pdf

Richard Gabriel's talk comes from an interesting perspective. He was involved in the Lisp community and has an academic background (he earned a PhD), but is not an academic. After working as a practitioner, he went back to school to earn a Master of Fine Arts. Upon returning to the technical community, he felt a paradigm shift had happened while he was gone. The conferences he used to attend had been renamed and were now focused on academics instead of practitioners. His entire field, Lisp systems engineering, and its journals had been deleted.

Then he was given the first scientific paper on mix-in inheritance. Being familiar with previous work done on Lisp-based inheritance systems, he felt that this paper was using the same terms to describe some of the mechanisms from the Common Lisp Object System, but the terms had different meanings. Gabriel felt he was experiencing incommensurability: a paradigm shift had happened from an engineering focus to a scientific focus, and now "scientific" papers were being written that described, as new, things that engineers had already known, using the same terms but with different meanings.

The talk is definitely worth watching. It is an interesting personal story intertwined with technical discussions of the previous work versus the paper he had been given. It is an exploration of whether incommensurability can actually happen and to what extent. He also challenges the myth that science always precedes engineering.

I'm honestly not sure whether Gabriel intended his talk and paper to have a single point. Maybe he is mostly interested in relating his personal experience, but this is what I took away:

  1. In general, science does not always precede engineering, and in particular the relationship between computer science and software engineering is even more complex, because the engineers literally create the reality that the scientists study.
  2. There are two approaches to software: the systems approach, and the language approach.
  3. Making engineering subservient to science means throwing away the progress that engineers can and do make.

This was actually the first talk that started the wheels turning for me on the relationship between science and engineering. I had been told in college that scientists expand the body of knowledge and engineers apply that body of knowledge. Gabriel uses as his example the steam engine. When the steam engine was invented the popular theory used to explain its operation was the Caloric Theory of heat, which stated that there was an invisible, weightless, odorless gas called "caloric" that permeated the Universe. The amount of caloric in the Universe is constant, and its interaction with air molecules can explain heat and radiation, and from it you can deduce most of the gas laws. The Caloric Theory was a useful theory with predictive power. When Laplace adjusted Newton's pulse equations to account for caloric, he was able to more accurately predict the speed of sound.

Eventually the Caloric Theory was replaced by Thermodynamics, and amazingly steam engines continued to work! The steam engine was developed by mechanics who observed the relationship between pressure, volume, and temperature. Whether its operation was explained by the Caloric Theory or Thermodynamics made no difference to them. Yet, an engineer's invention can and does spark the curiosity of a scientist to develop a theory to explain how it is that an invention works. This is even more true in the case of computer software.

The second moral I drew from Gabriel's talk is that there are (at least) two approaches to software: a systems approach and a language approach. Gabriel acknowledges that at first he thought the incommensurability that he saw was a difference between an engineering paradigm and a scientific paradigm, but eventually he saw it as a more technically focused conflict between a systems paradigm and a language paradigm. Perhaps what Gabriel means is that you can approach either systems or languages from an engineering or a scientific perspective. However, I tend to see systems versus languages as engineering versus science.

The systems paradigm views software as interacting components forming a whole; real stuff doing real things. The language paradigm views software as abstract signs and rules of grammar conveying meaning. Good design, from a systems perspective, comes from a skilled technician following good design principles (I would even call it aesthetics). Good design, from the language perspective, comes from a relatively less skilled technician working within a language that from the outset excludes bad design through grammatical rules and compilers. The system approach tends to view software as a living organism that is incrementally poked and prodded, changed and observed. The language approach tends to view software as a series of mathematical transformations, preserving meaning. If each of the paradigms were a theory of truth, the systems paradigm would be correspondence, and the language paradigm would be coherence.

I see system versus language as engineering versus science. I view engineering as a bottom up, incremental, tinkering approach, at least when it comes to software and the way I like to practice software engineering. I view science as a top down, formal, mathematical approach. I actually like both, and I think both have their place, but when engineering is made subservient to science, we're actually losing something very important. When engineers are shut out of conferences and journals, there are discoveries that will be left unpublished, and new scientific theories left untheorized. (This was what Gabriel saw happening.)

Computer Programming as an Art

"Computer Programming as an Art" by Donald Knuth
http://dl.acm.org/ft_gateway.cfm?id=1283929&type=pdf

For those with even a cursory exposure to Computer Science, Donald Knuth needs no introduction. Knuth is coming from an academic perspective, but even for an academic his perspective is a bit unique. He has created and maintains several large open source software projects. This is his ACM Turing Award lecture given in 1974. He starts by quoting the first issue of the Communications of the ACM (1959). It claims that for programming to become an important part of computer research and development (to be taken seriously) it needs to transition from being an art to a disciplined science.

The big idea I draw here is: Programming can be art (in the "fine art" sense), which means it is (at least sometimes) a creative endeavor.

Knuth first explores the definition of "art" and "science." He looks at their use over time. Their use was (and is) not consistent. At times "science" and "art" are used interchangeably. "Art" was used to describe something made of human intellect, not nature. Eventually "science" came to mean "knowledge" and "art" came to mean "application." Though even that usage is not universal. To Knuth an "art" is something that is not fully understood and requires some aesthetics and intuition. A "science" is something well understood. Something that can be mechanized and automated. It is something that can be taught to a computer. Can computer programming be taught to a computer?

Knuth does not think that programming can ever be fully automated. However, it is still useful to automate as much as possible, since it advances the artistry of programming. He believes, and cites others, that progress is made not by rejecting art in the name of science, nor science in the name of art, but by making use of both. He references C. P. Snow's "The Two Cultures" as an example of someone else voicing concern about separating art and science. At this point when he speaks of art he means something more along the lines of "fine art" than "engineering."

Knuth goes on to talk of creativity, beauty, art, and style. He hits on how sometimes resource constraints can force a programmer to come up with an elegant solution, and this has an artistic aspect to it. He also encourages people to, when it comes to programming, make art for art's sake. Programs can be just for fun.

Knuth's talk is focused on the act of programming, and when he deals with engineering versus science he means with respect to the act of programming. To what extent can the act of programming be made automatic? To what extent must it remain a human act of creativity? This is a little further afield than the previous sources, but Knuth's insistence on seeing programming as a creative act is the big idea I drew from his talk, and is really the point of his talk.

The idea that programming can sometimes be a creative act raises a lot of questions in my mind. Is programming always a creative act? If programming is a creative act, how should a programming project be managed? Is the high failure rate of software projects related to this? Perhaps this ties back into Tom Simpson's "Masterpiece Engineering" satire. Imagine a project manager with a room full of artists creating Gantt charts and task dependency graphs to plan out the creation of a new masterpiece!

On the other hand, nothing appeals to the ego more than seeing oneself as a grand master of art. There should be a measure of moderation here. I think there is benefit to trying to understand programming as an artistic (or at least "creative") endeavor, whatever that means, but we should not go crazy with hubris.

Better Science Through Art

"Better Science Through Art" by Richard P. Gabriel and Kevin J. Sullivan
https://www.dreamsongs.com/Files/BetterScienceThroughArt.pdf

"Better Science Through Art" by Richard P. Gabriel
https://www.tele-task.de/archive/video/flash/12636/

I have already covered some of Gabriel's background, but I will say that having been involved and educated in both a technical field and an artistic field gives him a unique perspective on the relationship between science, engineering, and art.

I unfortunately don't know much about Sullivan's background, other than he is a professor of computer science at the University of Virginia. His collaboration with Gabriel produced one of my favorite papers ever. I don't know that I can tease out what should be attributed to whom. I will be basing my comments on Gabriel's talk, but I don't intend to attribute everything to him, or to diminish Sullivan's contributions.

The big ideas I drew from this are:
  1. Science, engineering, and art all have at their core "disciplined noticing."
  2. Disciplined noticing is a skill that requires practice.
  3. The creation of knowledge—even in the case of science—requires an abductive leap powered by creative spark.

This is a really great talk, and it covers a lot of ground. It is entertaining, insightful, and very much worth watching. He attacks some common caricatures of science, engineering, and art, and digs into the actual process behind each. In the end, he finds that there are a lot of similarities among the methods of science, engineering, and art: a process of exploration, discovery, and verification. He calls it disciplined noticing.

I have found this to be true in my experience. Just like people have a caricature of science, that it is straight line progress, the monotonic aggregation of knowledge, there's a similar caricature of software development. My experience has been that writing software is a creative, exploratory process. Sometimes I go down an alley, but find that I need to back out and take a different turn. I may write a test, run it, change some code, change a test, run it, think for a while, delete a bunch of code and rewrite it all.

In my experience this process—writing, evaluating, and rewriting—has much more in common with writing a novel than constructing a building.

Conclusion

This long meandering post must come to an end. First of all, I would highly recommend looking at each of these cited sources. They will reward you. Perhaps you may even find that I have seen them through my own preconceived notions, and you may draw an altogether different conclusion from them. So be it.

This "conclusion" is not really a conclusion, but a way-point. I started on this journey to understand the nature of software engineering, how it is different from other kinds of engineering, and why it is so hard. I ended up at a place that intuitively I knew I would end. I will not make an absolute statement. I will say that at least sometimes (and in my experience) software development is a creative process more akin to creative writing.

I have also seen that there is a tremendous amount of creativity in both engineering and science. I believe that at the core of engineering, science, and art is a drive to understand and influence the world, which requires observation, testing, and evaluation. I don't claim to know how to do software engineering "right," but I don't think we will ever do it right if we refuse to see that creativity (which is at times unpredictable) is a key part of the effort.

I have learned that both engineering and science are useful for discovering and validating knowledge. Scientists and engineers should collaborate. Neither should be seen as primary at the expense of the other. They can even be seen as external expressions of the same process, sometimes using similar tools and techniques.

I have learned that software is unique in engineering. Whereas a blueprint is a written artifact using specialized notation, the building it describes must be brought into existence through a complex, error-prone process. Code is written using specialized notation, but the gap from code to execution is much smaller. There are pitfalls and challenges, no doubt, but I would like to see how the nature of what we produce can change how we produce it. I'm still holding out hope that the nature of software can change the face of the human organizations that produce it.

Practically, what this all means is that a software engineering process should be iterative. It should embrace unpredictability and allow space for the creative process. In the same way that a painter never thinks his painting is complete, software should be developed in a way that continuously produces value, so that the project could be closed down and the product shipped at any point with the customer still happy with the result.

So I end back at the beginning with Vanderburg. I don't think that agile is the last word, but I think it is the best we have so far.

Manufacturing Creativity

Previously, I've attempted to convince you that making software is a creative act, and I explored the implications for pursuing and managing software engineering. (By the way, science and engineering are also creative acts, and a great exploration of that idea is "Better Science Through Art" by Richard P. Gabriel and Kevin J. Sullivan. Love that paper.)

I've been thinking a lot lately about creativity and how it can be encouraged (even manufactured?). I've also been thinking quite a bit about why people do (or do not) take on ambitious projects, and how to survive a years long ambitious project. I've learned some very interesting things that some day I may write about, but I'd like to share what I've learned about creativity.

What I've discovered about being creative is that even among people in very different lines of work (actors, writers, artists, programmers, scientists, investors) there's a surprising amount of agreement about how it works. I've also discovered that it is not an innate talent that some people have and some do not. Everyone has the tools to be creative.

In many ways this goes all the way back to the very first Clojure Conj in October of 2010. Rich Hickey gave a talk titled "Step Away from the Computer"...actually, it had three titles, and it is best known by one of its other titles, "Hammock-Driven Development." I was there in person. I came away with the mistaken impression that the talk was about writing software and solving technical problems. I now know that making software is a creative act, and Rich's talk was about how to be creative.

For Rich, the engine of creativity is the "background mind," which is in contrast to the "waking mind." Your waking mind is your normal mode of operation. It is good at analyzing and thinking critically, but can be too tactical and get stuck in local maxima. Your background mind is good at making connections, thinking abstractly, and synthesizing. It can make the leap past local maxima. Unfortunately, your background mind cannot be tasked directly. However, you can task it indirectly by obsessively thinking and reading about a particular problem, and, though you can activate it other ways, it is easiest to activate by sleeping, or by relaxing and simulating sleep (i.e., using a hammock).

So, creativity is an indirect process: you task a relaxed mental mode by obsessively thinking about a problem, and you filter its products only after the fact with your normal critical-analytical mental mode. Now here's the surprising part: almost everyone who attempts to describe their creative process describes it similarly. In his essay "The Top Idea in Your Mind," Paul Graham says:

Everyone who's worked on difficult problems is probably familiar with the phenomenon of working hard to figure something out, failing, and then suddenly seeing the answer a bit later while doing something else. There's a kind of thinking you do without trying to. I'm increasingly convinced this type of thinking is not merely helpful in solving hard problems, but necessary. The tricky part is, you can only control it indirectly.

John Cleese gave a talk on creativity, and he called the background mind "open mode" and the waking mind "closed mode." In your open mode, you are relaxed, less purposeful, curious, and a bit playful. In your closed mode, you are active, determined, and have a critical eye.

George Land is a businessman who investigated how to stimulate and direct creativity. He found there are two kinds of thinking: divergent and convergent. Divergent thinking is creating new ideas. Convergent thinking is judging and evaluating ideas. He did a longitudinal study that found that 98% of 5-year-olds exhibit divergent thinking, 30% of 10-year-olds, 12% of 15-year-olds, and only 2% of adults. As a person gets older, he or she is taught to use both divergent and convergent thinking at the same time. The result is that one criticizes and judges ideas before they can fully develop.

Of peculiar interest to me has been what independent game designer Jonathan Blow—who worked on his successful and influential game Braid for 3.5 years—has said about creativity and surviving ambitious projects. (Maybe someday Rich will talk about how he survived his own ambitious projects: how to maintain motivation day-to-day, how to fund it, how to plan and pace it, how to finish it.) The thoughts about ambitious projects are for another time, but what he says about creativity should be familiar by now. Blow says metaphysically you may not buy into the Greek concept of the Muse—nor may he—but functionally it is real. Creativity feels like something external, and you have to get yourself into a relaxed mode to provide opportunity for new ideas, though you cannot guarantee anything.

Tools and Techniques


I hope to find more resources on direct techniques for stimulating creativity (e.g., instead of thinking about solving a problem, think about how to make it worse and avoid that), but for now I've found a lot of agreement about how to encourage creativity in an indirect way.

Obsess about your problem. If your subconscious mind (or unconscious mind or background mind or whatever you want to call it) is going to solve a problem for you, then it needs information. Rich has a lot of great advice about this. Write down your problem. Write down what you know. Write down what you don't know. Read about your problem. Read about related problems. Pick apart other solutions. Paul Graham in "The Top Idea in Your Mind" says, "It's hard to do a really good job on anything you don't think about in the shower."

Relax. For Rich this is lying in a hammock and focusing, thinking through all the information you've loaded into your mind. For Blow, a relaxed state of mind is really a pretty active body. He likes to find something purely physical that he can enjoy, like going to a club and dancing. Cleese creates an oasis by blocking off time and setting aside other concerns. He gives himself enough time that he can work through all the TODOs that pop into his head. He writes them down for later, and gets back to being relaxed and playful.

Pace yourself. Cleese recommends, if you're going to try to set aside time for creativity, to limit it to no more than an hour and a half, because you'll need a break. If you need more time, then do it again the next day.

Be Playful. George Land found that children are more creative. Cleese finds being in a playful mood conducive to creativity, especially when collaborating with others. Play, imagination, daydreaming all come from or lead to a relaxed state of mind, which accesses your creative mechanism.

Write things down. Rich is big on this. There are several benefits: it helps you think thoroughly, it helps you remember things, it is easy to skim for recall.

Gently keep your mind focused. Cleese says to be successful you must keep your mind gently around the problem. You may wander off, but gently come back to it. Rich uses hammock time not just to relax, but to recall information. Touch each fact with your mind to keep it fresh, and to make it interesting to your background mind.

Have a dogged persistence. Cleese sticks with a problem, and doesn't just take the first idea he comes up with. Sometimes a creative breakthrough requires persisting through the discomfort, even slight anxiety, of an unsolved problem. Rich reminds us that since this is an indirect process it may take days, months, or years for a solution to come.

Anti-techniques


How can you destroy creativity? Easy:

Chase success. Paul Graham says the way to destroy your creativity is to make money the top idea in your mind. It tends to consume all your mental energies. Blow also warns about thinking about success or how others will judge what you do. These things can easily lead to fear, and as Cleese says you need to feel confident to be able to generate ideas.

Obsess about disputes. Paul Graham talks about how Isaac Newton got involved in disputes and regretted the wasted energy. This is really just another form of worrying about what other people think.

Make a schedule. Blow warns about making a schedule, but also admits that we must all deal with schedules. Rich says his techniques don't work under pressure. While Cleese sets aside time to be creative, he recognizes that the process is unpredictable and needs time.

Pre-judge ideas. You must be open; Cleese doesn't call it "open mode" for nothing. Brainstorming forbids judging ideas, and as a technique it gets that much right. George Land found the more we use divergent and convergent thinking together—in other words the more we try to pre-judge ideas—the less creative we will be.

Get distracted. Cleese says you need to create a space free from distractions. For Blow, even the threat of a distraction can prevent him from relaxing, so he'll even spend time at a coffee shop for a few hours before heading into the office.

No humor. According to Cleese humor is about two frameworks coming together to make new meaning, and this is also the core of creativity. If you eliminate humor, then you eliminate creativity.

Live actively and urgently. If you want to ensure no relaxation happens, if you want to ensure that you are in closed mode, then live urgently and actively.

Conclusion


If you're here you are probably a computer programmer (most likely a Clojure programmer). That means you're probably a bit like me. You're good at thinking analytically and logically. You're good at judging solutions based on correctness, performance, etc. You're good at operating in "closed mode." These are great skills, and as Cleese says we need both open and closed mode to succeed: open to generate ideas, and closed to execute on them. We just may need to work on the open mode a bit.

You have the ability to be creative. You have a relaxed, curious, playful, imaginative self. There are some techniques that others have used that may help you access your creativity. They may help you, they may not. You may need to experiment a bit for yourself.

You cannot fully control this process. You can only indirectly stimulate creativity, and you cannot guarantee that your mind will solve the problem you want it to solve. One approach would be to work on several problems at once! You may also find some fruitful connections between the problems.

To be creative you must be persistent, and you must practice. I hope this helps you find those imaginative solutions.

Monday, May 21, 2018

Gardening

If you're like me, you spend 8+ hours a day in front of a screen. About five years ago, I decided that I needed better hobbies than learning new programming languages and writing code for personal projects. I wanted to find ways to learn new skills and connect with people. I've done that by playing board games at local meetups and building a robot, and I've done that with gardening.

Gardening has been incredibly frustrating and incredibly rewarding in a roller-coastery kind of way. I'd like to share my journey with you in an attempt to get you interested in gardening. I'll share some resources I've found interesting and useful.

Why gardening?


I chose gardening for many different reasons. I remember my parents having a garden when I was a kid, and I wanted to have a hobby that my kids could be involved in and excited about. I like to eat things like tomatoes that my wife does not often buy, because no one else (including her) likes them. I wanted to do something outdoors. I wanted to become a little more self-sufficient.

Those are some of my reasons, but maybe you have other reasons. Maybe you'd like to reduce your carbon footprint by producing your own food that doesn't get shipped halfway across the world. Maybe you like the idea that food from your garden is essentially tax-free income. Maybe you want to increase the diversity in your diet and/or help preserve and conserve heirloom food varieties that are endangered. Maybe you don't want to grow food at all, but flowers that provide vibrant, delicate beauty.

How gardening?


There are many ways to garden, from containers to raised beds. One of the things I enjoy about gardening is that there is an entire world of new things to learn. It is a gateway hobby into things like cooking, canning, composting, carpentry, and other words that begin with 'c'.

I have focused mostly on fruits and veggies, since I want to be able to eat from my garden, but I've also grown (and grow more and more) flowers. I've grown some edible flowers and some inedible. It is incredibly satisfying to have some color around the house.

I started small with some containers on my deck. I used a couple of EarthBoxes, then built my own DIY EarthBoxes. I like the sub-irrigated planter (SIP) concept so much that I'm planning on putting in a raised bed SIP in my backyard, automatically fed by rain barrels. If you want to learn more about SIPs, check out https://www.youtube.com/albopepper.

Gardening (like most hobbies) can be as expensive as you let it be. You can buy all kinds of gardening gadgets and gizmos. One of my goals is to make gardening as economical as possible. To garden you need:

  1. Plants
  2. Sun
  3. Water
  4. Nutrients

The sun part is pretty easy, since my back yard is south facing. I just need to work around the shadows cast by trees and the deck.

You can buy seeds pretty cheaply, but you can also harvest seeds from your plants, so you don't have to continually buy seed packets. This will only work with open-pollinated (OP) plants. Check out this video to learn about OPs, hybrids, and heirlooms: https://www.youtube.com/watch?v=zkMEmkecSHs. Often, it is easier to buy seedlings at a nursery or farmer's market.

You can also plant perennials like strawberries and asparagus. These don't need to be replanted every year. You plant them once and you can harvest for years.

You can obviously water your plants from the tap, but rain barrels are a way to save money by taking advantage of an abundant resource over our heads. You can buy rain barrels, or you can make your own. My water company even gives a $30 rebate for each of up to two rain barrels that I install.

Plants need nutrients, and nutrients can be provided by fertilizer. I still use fertilizer occasionally, but I've opted to make my own compost. Unfortunately, I don't have many trees whose leaves I can compost, which is usually the easiest way to make compost. However, I am composting what leaves I have along with grass clippings and cardboard boxes from all my Amazon Prime orders. I compost trimmings from my garden and kitchen waste. I'm even thinking about getting some composting worms! Here is a video about how ridiculously easy it is to compost: https://www.youtube.com/watch?v=n9OhxKlrWwc

Lessons Learned


I've been gardening about five years, and here are some lessons I've learned.

Time and timeliness.

As a software engineer, I work in a field where I'm constantly learning, and there's a new JavaScript framework every week. I enjoy being more aware of the weather and seasonal rhythms. Plants work on a different timescale. If something goes wrong with the crop this year, I may have to wait another whole year to try again. That can be frustrating, but it can also be an opportunity both to think over a longer timescale and to be very focused on what is happening right now because the stakes are high.

Everything wants to kill your plants.

In container gardening on my deck I've dealt mostly with insects, and there are billions of them. When I moved into raised bed gardening with my strawberry patch, I had to deal with deer eating all the leaves off my strawberries. For the past couple of years it has been impossible for me to grow zucchini or squash, because vine borers have eaten them from the inside out. I'm not necessarily a fan of squishing bugs, but there was nothing more satisfying than digging those buggers out and squishing their fat bodies. It was a kind of anger management program.

The lesson is you need to think about pest management from the beginning. Talk to your neighbors about what pests they've dealt with in their gardens. Or at least be prepared that the first year could be rough until you know what you're up against. When you do know what you're up against...research! If you live in the US, look up your local cooperative extension website. Virginia's has all kinds of great publications for growing things in my region.

Your plants want to live

Even the sun can sometimes be brutal on your plants. I tried seed starting a couple of years ago. The last step is to "harden off" your plants by gently exposing them to the elements. I was a little less than gentle and nearly killed my plants.

After the hardening off incident I felt like a bad plant daddy, but the amazing thing was my plants came back. They want to live. They are partners in this gardening adventure.

It is satisfying to make things grow

It can sometimes be difficult to diagnose what is wrong with a plant: is it overwatered, underwatered, missing some nutrient, etc? Plants are complicated yet fascinating living things. It is worth the effort to understand them and work with them. One of the most fascinating books I've read is Botany for Gardeners by Brian Capon http://a.co/7SSM1Wi. I really enjoyed Brian's writing style, and it is a very approachable introduction to cellular function, propagation, and the fascinating life of plants.

In the end there is a lot to learn, and it is hard work, but it is so satisfying to nurture a living thing.

It is satisfying to work hard

I have a personal rule for myself that as much as possible I will refuse to have someone else mow my lawn. It saves money. I listen to podcasts and audio books. I like to walk around my house and property (only 1/3 acre but still) and see how things are doing. It can be hard work since my yard is mostly a hill, but I like to get the exercise.

Gardening can be hard work, too. One Sunday afternoon, in addition to mowing and edging, I pulled out two bushes (which, if you've ever done it, you know), and planted an apple tree and six red raspberry canes. I was sunburnt and sore, and paid for it the next day, but it was satisfying, and I'm looking forward to the fruit of my labor (literally!).

Play the odds

I recommend starting small, because like any hobby you can get excited and spend a lot of money before you realize it. However, you also have to know that gardening is about playing the odds, so don't start too small. When you start seeds, you put three in each hole, and when they sprout you thin them down to just the strongest of the seedlings. If you buy tomato seedlings from a nursery, don't just buy one, buy two or three. You have to expect that some plants won't survive.

It can also be helpful to plant more than one kind of thing. You may not get everything you want, but you should plant a diverse mix of plants and enjoy whatever you get. If you only plant cucumbers, then a horde of cucumber beetles can destroy everything, but if you also have tomatoes, then it's not a total wash.

Conclusion


Have I accomplished my goal of learning new skills and getting to know people? Absolutely! Of the five houses that border mine, three belong to gardeners, and when I'm out early in the morning tending my garden my neighbors are often out, too. I've had chances to get to know them.

I've gotten outdoors. I've gotten plenty of exercise. My kids are involved and excited about gardening. They even eat things they normally wouldn't, because we've grown them ourselves.

If you want a hobby to get you away from the screen and doing something physical in the real world, then give gardening a go.

Thursday, September 7, 2017

The Ethics of Software Quality

Security professionals are in a hard place. If there is a security breach, they take the fall. However, if they do their job right, no one notices. Further, they may even meet resistance to doing their job right because they are being overly cautious, taking too much time, costing too much money, etc., etc.

I think a software professional who wants to create quality software faces the same challenge. You may deliver quality software, but then get accused of taking too long (according to some arbitrary idea someone has) or "gold plating." You get compared to co-workers who write code much faster, even though it may have more bugs. Focusing on speed as a primary metric for software development is a race to the bottom.

This is not to say that there aren't times when something needs to be timeboxed, or a programmer needs to resist "gold plating." It is possible to fall into a trap of tweaking and refactoring ad infinitum. However, I don't find that there is a bright line or objective standard for judging this. Maybe that is because I believe software development to be a creative, exploratory process, so I'm apt to think there's more than a little taste and discernment involved.

To produce quality software you must take an ethical approach. What do I mean by this? While it seems obvious that there are ethical issues in software development---for example poor quality software wastes time and money, causes frustration, and in the extreme case can cause damage to property and loss of life---that's not what I mean.

What I mean by "ethical approach" (and maybe there's a better term for it) is that you must have an intrinsic motivation to create quality software. You have to do it because "it's the right thing." You will rarely get support from managers to produce quality software. You will shoulder the blame for quality issues in your code. If your code is beautiful and functional and bug-free, rarely will anyone even notice, let alone commend you.

How can you develop a "software quality conscience"? I don't have all the answers, but I have a couple of suggestions:
  1. Read good code and read about good code. If it is garbage in, then it will be garbage out.
  2. Surround yourself with other people who care about quality. Find a team of like-minded people, whether it is at work or not.
  3. Keep things in perspective. I find, as I'm further into my career, that I've had bosses bluster at me to get things done by a certain time ("do or die"), and found that it didn't really have a huge impact on the success or failure of my project or company. Don't be insubordinate or lazy, but don't buy into the hype. Be realistic.

You are responsible for fighting the good fight. So step up.

Monday, February 27, 2017

Continuous Planning

"In preparing for battle I have always found that plans are useless, but planning is indispensable." -- Dwight D. Eisenhower

There is a tension between engineering on the one hand, and on the other hand those who would like to know when the task will be done. A product must be marketed, documented, sold, and supported. "When it's done," is useless when you're trying to sell to a customer against a market full of competitors. However, the software we write gets more complex each day, and the process for bringing it to life is complex. Complexity means unknowns, and unknowns mean uncertainty. A software project is like a hurricane with a cone of uncertainty preceding it. This tension between the desire to know and the reality of uncertainty is a fundamental part of working a software project (and probably other kinds of projects).

Before going too much farther I will state my assumption: a completion date is an output not an input, and the most effective tool for managing a completion date is changing the amount of work you want to do (i.e. "scope").

You cannot take a date and work backwards. This is no different than taking a date and working forwards. Well actually, there is a big difference. In working forward, you can always push the completion date out. In working backwards you cannot start any earlier than now. The completion date inevitably follows from when you start, how quickly you can work, and how much you are trying to do.

You can spend money on tools, training, consultants, but these each have a time cost.

You can add more people, but in order to establish context on a project a new person must learn the code base, tools, technologies, and personalities of the team, and to do so he or she must take time from an otherwise productive member of the team.

You can have the current team work overtime, but too much of that will cause quality issues and burnout.

You can relax expectations about quality, but that is just trading your future time to get something done more quickly and temporarily.

The best thing you can do to manage a completion date is to cut the amount of "stuff" you are trying to do, or to rearrange the order in which you will do it, so you get the things you want earlier than you otherwise would have.

Given that a date is an output, as engineers and managers we try to navigate this tension between the desire to know and the reality of uncertainty with planning, but there's a problem with plans: they're useless. Imagine planning a single task. When will it be done? Well, if you ask one engineer she will give you an estimate based on her skill and experience. If the task is ever given to another engineer, then that estimate is invalidated. On top of that, an engineer (or human really) is notorious for estimating only the amount of work she must do. She doesn't think about QA testing, deployment, and data migrations, among other things. Nor does she think to factor in overhead like meetings, filling out time cards, learning new skills, bonding with coworkers, etc.

That is just at the most atomic level of estimation. Once you start to think about collaboration things get more complex. Does our engineer need to get a review from a coworker? That coworker is now being taken off of his task to do the review, which can lead to delays. What if our engineer needs assistance from someone more familiar with a particular technology or part of the code base? What if our engineer wants to brainstorm with another engineer? If our engineer gets delayed then any tasks that were dependent on her task also get delayed.

Now imagine making a plan for a product that spans several teams and tens or hundreds (or thousands??) of people. If you don't know everything that everyone is working on and how it is all related, then you can't plan anything with certainty. And that is only taking into account everything that can be known; there are still unknowns (like someone getting sick). This is the uselessness of a plan.

Well, a plan is not entirely useless. It is probably very accurate for the tasks that will be started in a few days, but entirely inaccurate for the tasks that will be started in three or six months.

So, there are two problems with a plan: 1) it must be updated to reflect new information, and 2) it fails to take into account the "cone of uncertainty."

Updating a plan seems easy enough; however, the larger the plan the more work it will take to keep updated. One could certainly employ an army of project managers who verify that the task breakdown, estimates, and dependencies have not changed; that you've taken into account every meeting, vacation plan, all the testing, deployment, and overhead. Ideally the plan would be updated continuously (so it's more of a "dashboard" than a "plan"). More valuable than knowing that three months ago we thought a task should be complete on such and such a date would be knowing when we think it will be completed as of now, with all the latest information we have access to. But that would create quite a drag on the entire team.

Even if you could keep the plan up-to-date, it gives the false impression that one can know precisely when a task will be complete. You may be able to predict the completion date of a task that starts tomorrow, but not for a task that will start in three months. Three months provides plenty of time for both knowns and unknowns to change when the task could even start, let alone when it would complete.

Usually, this uncertainty is handled by "padding" the date, but this is not enough. A single point-in-time completion date conveys certainty, and this is certainly wrong. The completion of a task should always be a range, one that is narrow for the near future and wide for the distant future.

Incidentally, I think even agile burndown charts get this wrong. In my opinion, there is (and should be) variability to a team's velocity. Simply taking some velocity value and running it out a few months to predict a single point in time when a task will be complete is at odds with reality.

What does Continuous Planning look like? Well, I don't really know, because I just made it up! At a high level I would summarize it as: plan using real data, with task completion ranges, over as long a term as you want, in aggregate, on average, continuously. The task completion ranges are the key. You can plan over as long a term as you want; however, the ranges will get wider. If you can reduce the variability in your process---and prove it with your data---then you can narrow the ranges. Planning is done in aggregate and on average, because it is impossible to know and manage every possible factor, so we must abstract away much of the minutiae. Finally, to plan continuously implies some kind of tool to facilitate it.

To the extent that I have thought about how this would work out practically, this is what I would do:

Each team would estimate their tasks by each member recording the number of hours he or she thinks it would take him or her to complete the task, given that there are no other distractions. This is a kind of pure estimation that engineers usually make. It would be helpful to discuss the task as a team, and try to elicit different opinions on the complexity of the task, so the estimates will be as complete as possible.

Why not use story points? I have been a fan of story points precisely because they abstract away hours. Hours can vary depending on a person's skill and experience. Hours can get lengthened by interruptions and by discovering additional complexity. Hours give a false impression that they map directly to calendar time, and that you can accurately predict when a task will complete.

However, the first thing a person asks is how long X points take. Usually you have to pick some "golden story" as a standard comparison for a certain number of points. People will usually, consciously or unconsciously, come up with some rule of thumb like, "an eight point story should take about a sprint to complete." So in the end you are estimating in hours, but they're a convoluted form of hours.

Hours are a natural unit for estimations. The danger in using hours is actually trusting the estimate for a precise completion date. We've already rejected precise completion dates with Continuous Planning, and the rest of the process is designed around (automatically) finding an accurate scale with which to judge these estimates. I would actually advocate that the estimates and velocities be hidden variables, and (other than your own estimate) you only see the completion range for a task. This would hopefully reduce some confused expectations around what it means for an estimate to be denominated in hours.

The estimates from each team member would be combined together into a single estimate for the task. The method for combination could be taking an average. It could involve throwing out extreme values first, or doing some sophisticated statistical analysis.
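
To make that concrete, here is a minimal Clojure sketch of one possible combination rule (drop the single lowest and highest estimates, then average the rest); the function name and the rule itself are my own invention, not a prescription:

(defn combine-estimates
  "Combine per-member hour estimates into one task estimate.
  Drops the extremes when there are enough samples, then averages."
  [estimates]
  (let [trimmed (if (> (count estimates) 2)
                  (-> estimates sort rest butlast)
                  estimates)]
    (/ (reduce + trimmed) (double (count trimmed)))))

(combine-estimates [4 6 5 40 5]) ;=> 5.33..., the outlier 40 is discarded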

Having done these estimates, a task tracking system would keep track of when tasks started and when they completed, or how long they have been in progress even if not yet complete. This actual data can be used to calculate a velocity. The velocity could be calculated at several levels: for an individual task, for a particular team member, or for the team as a whole. You could even calculate the velocity for a feature epic cutting across several teams.

To calculate a date range for completion, you can take the average plus or minus a standard deviation of a single velocity calculation over time and get an optimistic and pessimistic velocity. Alternatively, you could get an optimistic and pessimistic velocity by taking the minimum and maximum of the most recent velocity calculations at two different levels (task and team, for example). I'm not sure which would work best; it warrants some research.
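
As an illustration of the first variant, here is a rough Clojure sketch; the velocity unit (estimated hours completed per calendar day) and all the names are assumptions for the example, not part of any existing tool:

(defn mean [xs] (/ (reduce + xs) (double (count xs))))

(defn std-dev [xs]
  (let [m (mean xs)]
    (Math/sqrt (mean (map #(let [d (- % m)] (* d d)) xs)))))

(defn completion-range-days
  "Given recent velocity samples and the remaining estimated hours,
  return [optimistic-days pessimistic-days]."
  [velocities remaining-hours]
  (let [m    (mean velocities)
        s    (std-dev velocities)
        fast (+ m s)               ; optimistic velocity
        slow (max 0.1 (- m s))]    ; pessimistic velocity, floored to avoid divide-by-zero
    [(/ remaining-hours fast) (/ remaining-hours slow)]))

(completion-range-days [3.2 4.1 2.8 3.6] 120) ;=> approximately [30.7 40.8]

Note how the range widens automatically when the velocity samples are noisy, which is exactly the behavior the cone of uncertainty demands.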

Tasks would be a hierarchical tree. An epic is really just a task with subtasks. The velocities and estimates can flow up the tree for the purposes of calculating estimated completion ranges for epics.
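
A sketch of that rollup, again with hypothetical names and the simplifying assumption that estimates live only at the leaves:

(defn remaining-hours
  "Sum the remaining estimated hours for a task and all of its subtasks."
  [task]
  (if-let [subtasks (seq (:subtasks task))]
    (reduce + (map remaining-hours subtasks))
    (:remaining task 0)))

(remaining-hours
 {:name "epic"
  :subtasks [{:name "a" :remaining 8}
             {:name "b" :subtasks [{:name "b1" :remaining 5}
                                   {:name "b2" :remaining 3}]}]})
;=> 16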

If you wanted to get fancy, you could draw dependencies between tasks, and the system could then attempt some kind of topological sort of the tasks and, using a prioritized backlog and team assignments for each task, construct a plan for what could be done in parallel and, based on velocities calculated from real data, calculate a completion range for each task, epic, and the project as a whole.
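
The dependency-ordering piece of that is standard; here is a small Clojure sketch of a depth-first topological sort (no cycle detection, purely illustrative):

(defn topo-order
  "Order tasks so each appears after everything it depends on.
  deps is a map of task -> set of tasks it depends on."
  [deps]
  (letfn [(visit [order task]
            (if (some #{task} order)
              order
              (conj (reduce visit order (deps task #{})) task)))]
    (reduce visit [] (keys deps))))

(topo-order {:deploy #{:test} :test #{:build} :build #{}})
;=> [:build :test :deploy]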

As you can see there are still many questions to be answered. I think this is an idea worth exploring. In my experience, the usual tools fall flat at resolving the tension between the desire to know and the reality of uncertainty.

To effectively attack this tension requires abstracting away much of the minutiae of detailed planning by embracing the variability of the process. The plan is useless. Planning is indispensable. Therefore, plan continuously.

Tuesday, August 30, 2016

"Clojure Polymorphism" Released!

From my new blog Real World Clojure. What am I doing with this new blog? I have no idea, but you can follow along.

~ ~ ~ ~

I have released a short e-book (30 pages) titled "Clojure Polymorphism." You can get 50% off by using this coupon link http://www.leanpub.com/clojurepolymorphism/c/ONeJZ629Isy7.
What is this book about?
When it comes to Clojure there are many tutorials, websites, and books about how to get started (language syntax, set up a project, configure your IDE, etc.). There are also many tutorials, websites, and books about how language features work (protocols, transducers, core.async). There are precious few tutorials, websites, and books about when and how to use Clojure's features.


This is a comparative architecture class. I assume you are familiar with Clojure and even a bit proficient at it. I will pick a theme and talk about the tools Clojure provides within that theme. I will use some example problems, solve them with different tools, and then pick the solutions apart for what is good and what is bad. There will not be one right answer. There will be principles that apply in certain contexts.
In this installment, I pick up the theme of "Polymorphism," looking at the tools for polymorphism that Clojure provides. Then I take a couple of problems and solve them several ways. At the end of it all, we look back at the implementations and extract principles. The end goal is for you to develop an understanding of tradeoffs and a taste for good Clojure design.


I have some ideas for other e-books. Perhaps a concurrency tour of Clojure taking a look at futures, STM, reducers, core.async, etc. Or maybe talk about identity by looking at atom, agent, ref, volatile!, etc. Or maybe look at code quality tools. Or how to organize namespaces. Or adding a new data structure with deftype?

What would you like to see? Contact me. :)

Friday, August 19, 2016

Reducible Streams

Laziness is a great tool, but there are some gotchas. The classic:

;; assumes (require '[clojure.java.io :as io])
(with-open [f (io/reader (io/file some-file))]
  (line-seq f))  ; the lazy seq escapes with-open

line-seq will return a lazy seq of lines read from some-file, but if the lazy seq escapes the dynamic extent of with-open, then you will get an exception:

IOException Stream closed  java.io.BufferedReader.ensureOpen (BufferedReader.java:115)

With laziness, the callee produces data, but the caller controls when data is produced. However, sometimes the data that is produced has associated resources that must be managed. Leaving the caller in control of when data is produced means the caller must know about and manage the related resources. Using a lazy sequence is like co-routines passing control back and forth between the caller and the callee, but control is transferred only per item; there is no way to run a cleanup routine after the caller has decided to stop consuming the sequence.

A Tempting Solution

One might immediately think about putting the resource control into the lazy seq:

(defn my-line-seq* [rdr [line & lines]]
  (if line
    (cons line (lazy-seq (my-line-seq* rdr lines)))
    (do (.close rdr)  ; close the reader once the seq is exhausted
        nil)))

(defn my-line-seq [some-file]
  (let [rdr (io/reader (io/file some-file))
        lines (line-seq rdr)]
    (my-line-seq* rdr lines)))

This way the caller can consume the sequence how it wants, but the callee remains in control of the resources. The problem with this approach is the caller is not guaranteed to fully consume the sequence, and unless the caller fully consumes the sequence the file reader will never get closed.
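
For example, a caller that takes only the first line never walks the sequence to its end, so the close branch is never reached:

(take 1 (my-line-seq "/etc/hosts"))  ; the reader leaks; .close never runs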

An Actual Solution

There is a way to fix this. You can require the caller to pass in a function to consume the generated data, then the callee can manage the resource and execute the function. It might look something like:

(defn process-the-file [some-file some-fn]
  (with-open [f (io/reader (io/file some-file))]
    (doall (some-fn (line-seq f)))))

(process-the-file my-file-name do-the-things)

Once upon a time clojure.java.jdbc had a with-query-results macro that exposed a lazy seq of query results, and it had these same resource management issues. It was later changed to use this second approach, where you pass in functions.

There is a hitch to this approach. Now the callee has to know more about how the caller's logic works. For instance, in the above code you are assuming that some-fn returns a sequence that you can pass to doall, but what if some-fn reduces the sequence of lines down to a scalar value? Perhaps process-the-file could take two functions seq-fn and item-fn:

(defn process-the-file [some-file item-fn seq-fn]
  (with-open [f (io/reader (io/file some-file))]
    (seq-fn (map item-fn (line-seq f)))))

(process-the-file my-file-name do-a-thing identity)

That's better? I still see two problems:
  1. The caller is back to having to know/worry about resource management, because it could pass a seq-fn that does not fully realize the lazy seq before it escapes the with-open
  2. The logic hooks that process-the-file provides may never be quite right. What about a hook for when the file is open? How about when it is closed?
I could argue that this whole situation is worse, since the caller still has to worry about resource management, and now the callee has this additional burden of trying to predict all of the logic hooks the caller might want.

An additional design consequence is that you are inverting control from the lazy seq case. Whereas before the caller had control over when the data is consumed, now the callee does. You have to break your logic up into small chunks that can be passed into process-the-file, which can make the code a bit harder to follow, and you must keep your sharded logic close to the call site for process-the-file (i.e. you cannot take a lazy sequence from process-the-file and pass it to another part of your code for processing). There are advantages and disadvantages to this consequence, so it is not necessarily bad; it is just something you have to consider.

Another Solution

We can also solve this by using a different mechanism in Clojure: reduction. Normally you would think of the reduction process as taking a collection and producing a scalar value:

(defn process-the-file [some-file some-fn]
  (with-open [f (io/reader (io/file some-file))]
    (reduce (fn [a v] (conj a (some-fn v))) [] (line-seq f))))

(process-the-file my-file-name do-a-thing)

While this may look very similar to our first attempt, we have some options for improving it. Ideally we'd like to push the resource management into the reduction process and pull the logic out. We can do this by reifying a couple of Clojure interfaces, and by taking advantage of transducers.

If we can wrap a stream in an object that is reducible, then it can manage its own resources. The reduction process puts the collection in control of how it is reduced, so it can clean up resources even in the case of early termination. When we also make use of transducers, we can keep our logic together as a single transformation pipeline, but pass the logic into the reduction process.
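
As a minimal sketch of the idea (not the actual implementation of the library described below), an object can implement clojure.lang.IReduceInit and keep the entire reduction inside a with-open:

(defn reducible-lines
  "A reducible wrapper: opens the file only when reduced, and always
  closes it, even if the reduction terminates early."
  [file]
  (reify clojure.lang.IReduceInit
    (reduce [_ f init]
      (with-open [rdr (io/reader file)]
        (reduce f init (line-seq rdr))))))

;; The reader is closed even though only two lines are consumed:
(into [] (take 2) (reducible-lines "/etc/hosts"))

Because the reduction runs to completion (or early termination) inside the wrapper, cleanup is guaranteed no matter how much of the data the consumer takes.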

I have created a library called pjstadig/reducible-stream, which will create this wrapper object around a stream. There are several functions that will fuse an input stream, a decoding process, and resource management into a reducible object. Let's take a look at them:
  • decode-lines! will take an input stream and produce a reducible collection of the lines from that stream.
  • decode-edn! will take an input stream and produce a reducible collection of the objects read from that stream (using clojure.edn/read).
  • decode-clojure! will take an input stream and produce a reducible collection of the objects read from that stream (using clojure.core/read).
  • decode-transit! will take an input stream and produce a reducible collection of the objects read from that stream.
Finally, there is a decode! function that encapsulates the general abstraction, and can be used for some other kind of decoding process. Here is an example of the use of decode-lines!:

;; assumes (require '[clojure.string :as string])
(into []
      (comp (filter (comp odd? count))
            (take-while (complement #(string/starts-with? % "1"))))
      (decode-lines! (io/input-stream (io/file "/etc/hosts"))))

This code will parse /etc/hosts into lines, keeping only lines with an odd number of characters, until it finds a line that starts with the character '1'. Whether the process consumes the entire file or not, the input stream will be closed.

Advantages:
  • This reducible object can be created and passed around to other bits of code until it is ready to be consumed.
  • When the object is consumed either partially or fully the related resources will be cleaned up.
  • Logic can be defined separately and in total (as a transducer), and can be applied to other sources like channels, collections, etc.
Disadvantages:
  • This object can only be consumed once. If you try to consume it again, you will get an exception because the stream is already closed.
  • If you treat this object like a sequence, it will fully consume the input stream and fully realize the decoded data in memory. In certain use cases this may be an acceptable tradeoff for having the resources automatically managed for you.

Summary

Clojure affords you several different tools for deciding how to construct your logic and manage resources when you are processing collections. Laziness is one tool, and it has advantages and disadvantages. Its main disadvantage is around managing resources.

By making use of transducers and the reduction process in a smart way, we can produce an object that can manage its own resources while also allowing collection processing logic to be defined externally. The library pjstadig/reducible-stream provides a way to construct these reducible wrappers with decoding and resource management fused to a stream.

Acknowledgments


Special hat tip to hiredman. His treatise on reducers is well worth the read. Many moons ago it got me started thinking about these things, and I think with transducers on the scene, the idea of a collection managing its own resources during reduction is even more interesting.