Historical Brewing 101: This Is Easier Than You Think

I’m liking this whole “write posts on Sunday” thing. I rarely have things that demand my attention on Sundays, so I often find myself sitting around with time to kill.

Perhaps I shall redirect that killing time to whitespace – fill the void with something useful. Slash and burn pixel forests with self-aggrandizing pontification on topics of incredibly specific interest and arcane origin. Type quickly and loudly simply for the sake of hearing the music of my keystrokes. Fill your eyes with needless words expounding far beyond the point of necessity, into a realm that can only be described as ego-stroking.

You love it and you know it.

So, basically, I think the Sunday post will become the norm. Adjust your calendars accordingly.

And yes, there will be a return to the every-other-week schedule soon. Moving to a new place imminently! Once life stops exploding on my face, we’ll return to the usual.

I expect you all are quivering with anticipation.

See, it’s funny because I am also a jolly fat man. That’s right, I’m fucking jolly.

Recently, I was given the opportunity to teach brewing at an SCA event up north a bit from Albany. It was a great experience – I had several very eager students turn out to learn all about homebrewing from the very basics (as in, “yeast + sugar + water = booze” basic), and it was quite the educational experience for me as well. This is why we teach, after all – as we give knowledge to others, so we receive it in return.

Teaching these classes helped me codify a position I’d long been trying to express; I’d like to share that with you.

While the SCA has a fairly well-defined mission in “recreating the arts and traditions of pre-17th century Europe,” the truth is that the accuracy of said re-creation varies. A lot. A whole lot. The group casts a fairly broad net in attracting members, which is both its greatest strength and its greatest weakness – we can take in people with a diverse range of interests and cultivate their inner Medievalist geek, but we also don’t separate by interest level or expertise. Thus, you get a very mixed bag in terms of research interest in the group as a whole. It’s a byproduct of being inclusive, and it’s a good thing, I think. Certainly, there are groups which cleave to a higher standard of accuracy and authenticity, but we can find that level in the SCA as well – and you’re more likely to find the SCA than, say, Regia Anglorum.

But what I occasionally run into is a person who not only isn’t engaged in recreating the art of the era – they actively oppose it. “Pfft, why would I make period wines? They were probably crappy!” Or “I don’t care about doing research – I just want to make something delicious.” Or better still, “Research is too hard to do.”

“Research is too hard, so I’m just going to do whatever?” That’s crap. That’s like saying “I don’t want to base my observations on facts – I’d just rather pull things out of my ass.” That is an unfortunately common mindset in the world (one to which we are all vulnerable), and we can work to reduce it by engaging in critical analysis.

I think it’s rooted, in part, in the desire of creative people to invest themselves into a project, and in doing so add to their own social value. We put a lot of ourselves into, say, a song that we write or a beer that we brew. We take our ideas, and using our hard-won skill, translate them into another medium that may be consumed by others. There’s a lot of ego wound up in our creations. We share that with other people. “If they like this, then they’ll like me and they’ll see that I have value!” We ingratiate ourselves with a society by our contributions to it, and we take that to heart.

But when we work from another’s source material – try to recreate someone else’s creation – it seems to us that we are no longer conveying our own ideas. “These are someone else’s ideas! What if they’re wrong? What if their idea sucks? If I create that and promulgate it as my own, everyone will think I suck!” We have to remove our own ego from such attempts at re-creation, because we need to think about how someone else did something – whether or not we think it was a good idea. This creates a situation where someone may dislike a thing and direct that dislike at us, while we stand there, helpless, trying to defend a thing that never came from us in the first place.

This can be daunting for many. We stand to lose investor confidence. Our social currency will weaken. Purchasing power declines. Our credit rating may be downgraded. And so, we become fiscally conservative regarding our social currency – stick to what we know works, and don’t take risks.

It’s all a lie. As I will show shortly, the process of re-creating is one of translation – and any translation involves choices on the part of the translator. That’s where you get to invest yourself – but because people are unfamiliar with it, because it is a new direction of expression and investment, they become scared.

Let’s take a look at how to overcome such fears. What follows is an applied form of the scientific method, used to recreate an ancient wine; while I’m focusing on ancient wine, this principle can really be applied (in specific modified forms) in any area of life that involves analysis of information and synthesis of ideas.

What’s the worst that could happen?

This is my process summarized:

1) Find a source

2) Identify and list critical steps

3) Ask questions and map possible answers

4) Continue asking questions until you can’t answer (or give up)

5) Pick your answers and justify your choices

6) Reassemble into a novel method

7) Experiment, document, and repeat

Most people who have done this to any extent will look at that list and go “Well, no shit.” It is not obvious to everyone, though, and there are a few other principles that we need to know going in:

  • Perfect replication is probably impossible.

In much the same way that science will never lead you to “100% certainty,” any attempt to replicate an item from history is inherently flawed. That’s OK – impossible goals are still useful, because they ensure that we’ll always try.

  • Every step along the path can be useful.

This ties back to my “being wrong is good” argument. Even our failures will teach us valuable lessons, and if you’re following a tightly-regimented process, your learning pace will be greatly accelerated. The key is to remember that you will be making choices while also documenting alternate paths – so long as you do that, you will have a map that you can continue to explore, time and time again, until you have satisfactorily exhausted its secrets.

Let’s do this step-by-step:

1) Find a source: You can do this the hard way (see the “Brewing with Egil” series), or you can “cheat” and find something that you want to replicate. Let’s cheat! I’ll start with an ancient Greek technique for something called Coan wine. Keep that open in a new tab as you read the rest of this.

2) Identify and list critical steps: Here, I like to look for things that are both familiar and foreign. Reading over the recipe, it ends with regular wine production: remove the grapes, press them, store the juice. OK, so we know where we’re ending. The initial stages seem bizarre, though. Collect seawater? Dry the grapes? What? It doesn’t have to make sense right now – what you need to do is cobble together a list of summarized steps:

  1. Obtain seawater with sediment removed.
  2. Pick very ripe grapes that have dried after a rain.
  3. Dry grapes: in the sun for 2 days, or outside generally for 3.
  4. Take 10 quadrantals of seawater and 40 quadrantals of grapes, in a container that can just hold all of it.
  5. Soak the grapes for 3 days in the seawater.
  6. Remove the grapes, press in the treading room, store the juice.

As I said, you don’t need to know what anything means just yet. In fact, it’s better that you don’t – put everything that might be relevant into your summary, and keep the original source handy in case you missed something. You’ll also need the originals for the next step.
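
(If you like to sanity-check proportions before worrying about what a quadrantal actually holds, a quick sketch like the one below works. This is my own illustration, not part of the source – it assumes only that both measures use the same unit of volume.)

```python
# Scale the 10 parts seawater : 40 parts grapes ratio to any batch size.
# The absolute size of the ancient unit doesn't matter for the ratio,
# only that both quantities are measured in the same unit.
SEAWATER_PARTS, GRAPE_PARTS = 10, 40

def seawater_for(grape_volume):
    """Return the seawater volume for a given grape volume (same units)."""
    return grape_volume * SEAWATER_PARTS / GRAPE_PARTS

print(seawater_for(4.0))  # e.g., 4 gallons of grapes -> 1.0 gallon of seawater
```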

3) Ask questions and map possible answers: What do I mean here? Let’s start asking a few and you’ll get the picture.

Start with step 1, the seawater. I’ll just start asking questions that come to mind:

  • Where is the seawater from?
  • What is the salt concentration of the water?
  • Are there other minerals leftover?
  • What kind of jar is the water stored in?

Now, start answering questions, and “mapping” different possible answers. Sometimes, a question has a fairly straightforward answer – but sometimes, multiple possibilities appear equiprobable. Put it all down. Note your sources – you can ask questions about those too.

  • “Coan” wine, after some searching, comes from the Greek island of Kos, which is just off the western coast of Turkey in the Aegean Sea.
  • Global average ocean salinity is 3.5%. Around Kos, it’s 4%.
  • Hm. I have no idea. Seawater has a lot of stuff in it, but I don’t know what will settle out and what won’t.
  • There could be several answers here. We know the Greeks used clay amphorae. They also used wooden vessels. They also used leather vessels. All are possibilities.

To start building your “map,” try to visualize the central step about which you are asking questions. Put this first round of questions around that step as radiating lines, and add answers to the ends of those lines. Something like this:

I don’t usually draw a literal diagram like this – this is just how I visualize my process, and how I engage in questioning. The crucial part is to leave your answers lying around as touchstones.
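
If you prefer to keep the map digitally, here’s a minimal sketch of one way to store it – nested questions with answers and open follow-ups. Every name and entry here is my own illustration, not a prescribed format:

```python
# A minimal sketch of the research "map" as nested dictionaries.
research_map = {
    "step": "Obtain seawater with sediment removed",
    "questions": [
        {
            "question": "What kind of jar is the water stored in?",
            "answers": ["clay amphora", "wooden vessel", "leather vessel"],
            "followups": [
                {
                    "question": "Does the vessel material change the water?",
                    "answers": [],  # unanswered -- an open road on the map
                    "followups": [],
                },
            ],
        },
        {
            "question": "What is the salt concentration of the water?",
            "answers": ["~4% around Kos"],
            "followups": [],
        },
    ],
}

def open_roads(node):
    """Recursively yield questions that still lack answers --
    the places you can return to on a later pass."""
    for q in node.get("questions", []) + node.get("followups", []):
        if not q["answers"]:
            yield q["question"]
        yield from open_roads(q)

print(list(open_roads(research_map)))
# ['Does the vessel material change the water?']
```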

4) Continue asking questions until you can’t answer (or give up): Now you can start asking questions about your answers, and building a web around those. Maybe you pick the “Wood, Leather, or Pottery” answer and start asking questions about it:

You can do this forever. I haven’t even begun to ask all the questions I could possibly ask – and that’s OK. That’s why you’re building the map – you research something until a point where you need to stop (or wish to stop), but leave the map around so you have something to come back to. It becomes a literal guide for your research that you can continue to reference repeatedly.

And with experimentation and learning, you’ll alter that map and find new directions!

5) Pick your answers and justify your choices: When you stop, what you’ll have are basic steps, with a huge network of roads and resting points (questions and answers). Follow a road of questions to an answer that suits you, and justify your stopping point. Any reason is valid, so long as you’re honest about it. Remember, you’ve always got your map, so you can return at any point and keep going down that road.

Perhaps you want to pick one initial answer and explore it until you’ve exhausted all sources of information. Cool. Maybe you want to find enough information to allow you to replicate the product with stuff you have in your house. Also cool. Your purpose will guide the answers you pick, and remarking on why you went where you did will help you when you revisit this map – and yes, you will revisit it. Over and over again.

So remember, you’re in complete control of the journey. It goes as far as you want it to, as fast as you want it to, and all steps along the way are valuable. In fact, small incremental experimentation is better than complex multi-variable efforts – it’s easier to analyze your information that way.

6) Reassemble into a novel method: Now that you’ve got your answers picked out, you’ll put them together in a logical order, modifying steps as appropriate to incorporate your answers.

So maybe I figure out the exact salinity and mineral content of the Aegean Sea circa 200 BCE. Excellent. I have to somehow incorporate that information into my processing – perhaps I manufacture a purified “seawater” by adding chemicals to distilled water until I hit the right mineral profile.
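
As a rough sketch of what that reconstruction math might look like (assuming, simplistically, that total dissolved solids track the ~4% salinity figure from earlier – a real mineral profile would be more involved):

```python
# Rough sketch: grams of dried sea-salt mix per volume of distilled water
# to approximate ~4% (by weight) Aegean salinity.
def salt_grams(water_liters, salinity=0.04):
    water_grams = water_liters * 1000  # water is ~1 g/mL
    # salinity = salt / (salt + water)  =>  salt = water * s / (1 - s)
    return water_grams * salinity / (1 - salinity)

print(f"{salt_grams(1.0):.1f} g of salt per liter of water")  # ~41.7 g
```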

Or maybe you decided, “Screw that, I live on Kos – I’m just going to go out into a boat and gather up water like they did.” So the boat, the voyage, the destination – those are all part of the method.

7) Experiment, document, and repeat: By itself, this is a valuable intellectual exercise – but look, I’m a brewer. I make shit. What good is all this research unless we get something out of it? So we figure out what we’re doing and set up a small experiment.

I did this at the class I mentioned above. I purchased two 2-quart bottles of Concord grape juice (no preservatives), and after a bunch of research and extrapolation, I figured out that I needed to add approximately 35 grams of salt to 2 quarts of juice to get the right salt content. I used hand-harvested Mediterranean sea salt, because that’s as close as I could get, and one sachet of wine yeast. I added the salt to one of the bottles, left the other alone, and pitched the yeast.

Easy, right? It wound up being ~2 tablespoons of salt, so the rate of addition I figured out is 1 tablespoon of salt per quart of grape juice.
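
A quick arithmetic check on that conversion (assuming fine sea salt at roughly 17-18 g per level tablespoon – coarse salt runs lighter per spoon):

```python
# Sanity check on the salt addition.
GRAMS_PER_TBSP = 17.5  # assumed weight of a level tablespoon of fine sea salt

salt_grams = 35     # total salt added to one 2-quart bottle
juice_quarts = 2

tablespoons = salt_grams / GRAMS_PER_TBSP
print(f"~{tablespoons:.0f} tbsp total")                    # ~2 tbsp
print(f"~{tablespoons / juice_quarts:.0f} tbsp per quart") # ~1 tbsp/quart
```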

If you’re wondering, it tastes really really weird.

So I tried it, and I just told you what I did (documented) as well as my entire process. Now? I can go back to my map and try it all over again. Like I said, you don’t need to make a literal map – that’d drive you insane – but the principle is one to which you should adhere. Make sure you’ve always got something to go back to, so that you have a launching point for new experiments.

This is essentially an adapted and applied version of the scientific method. We observe, hypothesize, experiment, learn, and do it all over again.

You’ll never be “finished” with it, but you’ll sure as hell make some great progress – and some really interesting products – along the way.

So go forth and try shit out! Dare to fail! Make something really weird! Just make sure you take good notes along the way.

I really just wanted to post this .gif again. Let’s say it’s something about experiments in human psychology. Yeah, that sounds right.

Back to the Drawing Board

To date, the projects I’ve discussed on this blog have been successful, to one extent or another.

It seems disingenuous to have written an entire entry about the necessity of being wrong – of screwing it up from time to time – without ever telling you about the times I have screwed up, when things went awry or unplanned. “Gang aft agley,” to use the words of Burns.

Even the greatest of us can’t be perfect all the time – though I try my hardest.

So let’s talk about salt.

If this was all I had to eat, I’d probably sail somewhere warm and murder a bunch of monks too.

Numerous texts about the Viking era make references to salt of some type or another. This is hardly surprising; salt is generally believed to have been an essential commodity in the ancient world. Hell, it’s pretty essential right now. In the time before refrigeration, salt (combined with other processing methods – typically drying) was really the only way to preserve highly-perishable meats (and other foods). The average person in the Middle Ages probably didn’t eat a lot of meat as a result of that – it had to be prepared fresh, or else salted beyond most reasonable recognition and used in some other type of dish. This probably explains why roasted meats were often A Big Deal in ancient feasts – having access to fresh meat meant you were wealthy enough to afford it.

But “salt” is a pretty generic term that can be applied to countless products. When we use it, we tend to refer to the familiar processed purified sodium chloride table salt that we use with frequency. The scientific meaning of salt encompasses far more than just white cubes of flavor enhancement; virtually any solid product of a neutralization reaction is a “salt” of some kind.

I’ve also discussed the error of bringing preconceived notions into historical research. OK, so we know “salt” generally means some kind of stable ionic compound. That’s probably good enough to figure out what they meant when they talked about “salt” in the Viking age, right?

Maybe not. From a 15th century manuscript in Dublin (23 D 43 from the Royal Irish Academy), we have the Old Icelandic Medical Miscellany, and its recipe for something called “Lord’s Salt:”

One shall take cloves and mace, cardamom, pepper, cinnamon, ginger an equal weight of each except cinnamon, of which there shall be just as much as of all the others, and as much baked bread as all that has been said above. And he shall cut it all together and grind it in strong vinegar; and put it in a cask. That is their salt and it is good for half a year.

Wait, what? There’s no mention of anything salt-like in here – just vinegar, bread, and spices.

Remember that our knowledge of ancient methodology is conveyed through translation. At some point, a translator simply has to decide what word best fits the “sense” of what he’s reading. Since salt was used primarily as a preservative in the ancient world (and not a seasoning as is its primary use today), it is conceivable that the word we’re translating as “salt” might mean something more akin to “preservative.”

Indeed, that Lord’s Salt is just a pickling solution – but it would accomplish the goal of preserving whatever went into it. We can see how something as simple as “salt” can be far more nuanced than the translation would lead us to believe.

Pliny documents several kinds of salt, some of which sound like our modern sea salt and others of which sound like other types of mineral salt. He even notes that some people make salt by pouring seawater over burning wood and coals. These various processing methods would all introduce different minerals to the “salt,” which would account for the variety that he documents. Contamination with vegetative matter (e.g. burned wood) would create an alkaline salt with what is probably a significant nitrate or nitrite content. Some methods involve boiling down seawater over fires; this would likely add a smoky character to the salt, as well as some lighter vegetative contamination.

1000 years from now, some poor bastard is going to do half-assed translation work and make the most interesting ham ever known.

So what did the Vikings probably do? Well, there are several words related to salt in Old Norse:

  • salt-brenna: Translates to “salt-burning,” and was often used as an insult.
  • salt-fjara: A salt-beach, where the burning supposedly happens.
  • salt-ketill: A “salt-kettle,” possibly used in salt production. Maybe a vat to hold seawater which is boiled down over a fire – see below.

“Salt-burning” is an interesting concept. One can see how pouring seawater over burning wood may be thought of as “salt-burning.” “Salt-kettle” makes me think that a kettle may be involved, and Pliny does refer to salt-pans used to boil water – so that’s possible. It may also be a pot to hold something as it burns into salt.

The “salt-beach” is interesting because it has a possible connection to kelp-burning, which has been practiced in Scotland for a while and was allegedly practiced in Iceland as well. This too would make sense as “salt-burning,” and we know that plant ashes contain a variety of salt compounds – so it could actually have a preservative effect. I imagine Icelanders building fires on the shores where they’ve collected kelp to dry, and burning the dried kelp to ash. Totally plausible.

So I drilled down and decided to focus on the kelp-burning thing, because it sounded the most wonky and interesting to me. There’s a company in northwest Iceland that harvests kelp for commercial use, called Thorvin; I got my hands on some of their animal-feed meal (I’m going to burn the hell out of it, so who cares if it’s for people or animals), and set to work.

Burning kelp meal is difficult. I started off by trying to burn it in a heap, but the lack of proper airflow in the middle of a kelp pile stopped that from being viable. A friend of mine suggested that we “burn it like food” in a cast-iron pot. Brilliant! That even fits the “salt-kettle” model I’d mentioned before – perhaps this would be a viable method of producing salt.

I will tell you right now that blackening kelp meal smells awful. Imagine the smell of low tide – and then set it on fire.

We burned and burned and burned – and while we never actually combusted any of the material, we certainly got it to change:

Holy shit, it’s almost like I know what I’m doing.

As I said, pyrolysed vegetable matter should provide an appreciable nitrite or nitrate content, so this should hypothetically cure anything to which I apply it. The stuff tasted something like burned ocean – a bit salty, a bit smoky, and a bit gritty. Awesome! This should work!

I ran a little experiment:

From left to right: 160 g of beef treated with a salt/sugar mix (A), a salt/sugar/curing salt mix (B), the burned kelp (C), and not treated with anything (OK, maybe some salt and pepper).

A, B, and C were thoroughly rubbed with the mixes and left in the fridge overnight. The last one was simply left in the fridge overnight untreated. The next day, I rinsed and pan-fried all of them – the un-treated one is shown post-cooking already.

This is what I found for all three products:

The first picture shows A and B, from left to right. You can see that A simply looks like cooked steak (albeit salty and with a more turgid texture), while B has the distinct pink hue of a cured red meat product (with a salty, tangy flavor and turgid texture).

The second picture shows the test, C. The burned kelp.

It tasted like a plain steak. Nice soft texture, no saltiness. Nothing. Just like the control un-treated steak.

In other words, it didn’t work.

I thought this would make a “salt” much more akin to something I’m used to today, and all of my knowledge indicated to me that it was a likely outcome.

Now, a little more elaboration on what I learned from the experiment itself. The burned kelp, when it was applied to the beef, formed a sort of “coating” rather akin to a very thin latex paint. Parts of it would come away together. Interesting. It appears that the burned kelp “sealed” the steak, preventing any water escape – and presumably preventing any mineral diffusion. Upon further analysis, this isn’t actually that surprising, because the way in which we burned the kelp is quite similar to the way in which tar is produced. We effectively removed the moisture content and left behind incompletely-combusted carbon compounds – likely some various hydrocarbons.

Creosote is a similar byproduct of burning vegetative matter, and is the portion of smoke that is responsible for preserving meat. Creosote builds up through the incomplete combustion of organic matter – much like what we did with the cast iron pot.

And creosote was once used to preserve meat. It wouldn’t salt the meat – rather, it would “seal” it and prevent anything external from spoiling it.

This is why I promote the necessity of failure: because it simply shows us a different way of thinking about our ideas. As I look over my failed experiment – results that did not go as I thought they would – I realize that I’ve only just begun to poke at this.

Perhaps this is just another form of “salt” that would have been used – a hydrocarbon-heavy coating that would extend the “fresh” shelf-life of the meat in question, an early method of creosote production that was lost or superseded.

Perhaps I need to figure out a way to combust the kelp – burn it like fuel until I’ve got potash, and use that.

And there are other potential avenues for me to investigate still.

This is how science works. We observe, we hypothesize, we test, we re-design. Even in failing, we gain useful information that we can use to refine our positions – and often, failure helps us to realize that we’ve been stubbornly persisting on the wrong track for some time.

Plus, this gives me a great excuse to over-engineer some contraption to better light shit on fire. Who doesn’t love that?

The Little White Lie of Science

IM IN UR LABZ, EJUKATIN UR FYOOCHER JENNERAYSHINS

There are many aspects of my job that I rather enjoy, but chief among them are the opportunities I have to educate groups of aspiring scientists. Our director makes a point of interfacing with local universities, giving students opportunities to learn about science on the ground – from actual scientists in a real-world setting.

For the past 5 years, I’ve given a tour and short lecture to the Microbiology class from St. Rose College in Albany. I use the opportunity to give them a real-life perspective on applied microbiology, demonstrating the ways that the techniques they learn every day can be put to use to solve actual problems that affect real people. I also use the time to expound on some of the more general elements of the biological sciences – without being too terribly political or biased. I try, anyhow. I’m only human.

In my most recent tour, I sort of expounded a bit on a topic that has been an interest of mine for a long time – that of the way we sell a science career to the bright and interested.

There is a certain romance, I think, when we talk about scientific work and the possibility of changing the world. No doubt, I wholeheartedly believe that the scientific method is the single most powerful cognitive tool humanity has yet devised, and I will defend that statement to my last. No system has generated so much sheer utility, nor improved the general conditions of so many by any metrics we care to establish. Sanitation? Medicine? You’re welcome for those, because they’re the only reason most of you are actually alive.

We tell people that with the vast powers of science, you can alter the course of history. You can topple nations, change hearts, annihilate planets, uncover the very fabric of reality itself. That with sufficient examination and dedication, there is nothing beyond the ken of humans. That we can make ourselves like unto the gods that some of us still fear.

We take this romance quite far – nearly to whimsical levels. We venerate the work of great scientists in the same way we venerate stories of the heroes of old – Beowulf and Odysseus and Arthur and every other figure that we’ve built to be “larger than life.” It was Isaac Newton who famously said “If I have seen further, it is by standing on the shoulders of giants.”

They’re all about this real.

And this is what begat the little white lie of science.

I talked about the necessity of being wrong, and the way that most people (even scientists) are pretty bad at it. Scientists are probably better, but they’re still far from perfect – and that means everyone else is screwed, basically. And it’s a pretty terrible problem, really. It is empirically demonstrable that the less you actually know, the more you think you know – and the more convinced you are of being correct.

It gets worse. Dan Kahan, of the Cultural Cognition Project at Yale Law School, has released some really depressing studies (though really really interesting) dealing with public perceptions of scientific issues in the US.

What we find in these and many other studies is the same story: people will accept or reject scientific evidence not on the basis of the evidence itself, but rather on existing cultural norms to which those people adhere. So if your cultural view is that evolution is fake and the Earth is 10,000 years old? Scientific evidence is astonishingly unlikely to convince you. Those who are less scientifically-minded do it more frequently than those who are more scientifically-minded, but the door still swings both ways.

The “little white lie” inherent to science is that empirical evidence collection in the testing of a hypothesis will lead to a well-supported conclusion…that we then accept. We reject our previously-held belief which is obviously wrong, and embrace the new truth.

It’s that last part that sort of underscores the whole thing – that makes it all worth the struggle – and that’s the part that isn’t quite true.

Dammit. I hate being right.

The truth of the matter – and this doesn’t just apply to the sciences – is that it is very nearly impossible to change the strongly-held views of any individual, even with the most rigorous set of facts and reason you can assemble. We simply engage in massive cognitive dissonance and assimilation bias, pick out the information we like, and go with that.

That means you. That means me. That means Professor Hawking. If you don’t believe in climate change, it is literally impossible for me to change your mind. I could throw a stack of research at you, and you will laugh it off because you know for a fact that I am wrong. Likewise, I cannot possibly conceive of evidence that would convince me of the existence of a god. If you showed me some, I’d probably dismiss it, because you can’t possibly be right.

Science will not effect change in the minds of individuals – but that’s not that surprising when we think about the principle of evolution. Evolution does not apply to individual organisms – that’s why the whole “why can’t you evolve a cat into a dog” line you sometimes hear is so laughably wrong – but rather, it applies to populations of those organisms over time. And even then, it’s not talking about radical abandonment of traits – evolution discusses the frequency with which those traits occur in the population. So, if in 100 years the frequency of the alleles for, say, red hair in humans declines from 17% to 12%? Yup, evolution. Exciting, right?!

This is how the advancement of scientific knowledge actually works. It won’t change your mind, but given enough time and enough people, the population as a whole will shift in a direction that increasingly accepts something which is demonstrated to be factual.

There are no giants in science, nor in the real world. There are no great mythical heroes of power. There are no “amazing breakthroughs that will forever alter everything.” It doesn’t happen. That’s a fiction that we attach to history to make it sexy – giving us all a goal to aim for. The sad reality is that it’s easier to convince people of the fantastic ability of others to effect sweeping changes than it is to sell them the grey truth of a life of incremental progress.

We venerate scientists like Darwin and Newton and tell everyone about the great strides they made and how indispensable they were. The subtext is simple: “Hey, that could be you some day. Wouldn’t that be awesome?” Truth is, Darwin wasn’t even really Darwin, at least not as amazing as we built him up to be.

Instead of giants, progress is made by stacking regular people on top of each other, and periodically throwing a cloak on top of one of them. The guy whose head sticks up is lauded as a hero, and we call him a giant – ignoring the fact that he is supported by the incremental work of the 10,000 people before him.

So there you have it. Don’t go into science because you want to smash the world’s shell, or figure out the thing that’s going to revolutionize particle physics – because it literally doesn’t exist. We make that up to sucker you in and share the misery of our existence.

No, go into science because you give a shit, and you want to engage in an enterprise that will, eventually, improve the lives of others.

If you’re lucky, maybe someone 50 years down the road will finally look at your life’s work and say, “Hey, there might be something to that.” 200 years later, you might be a dragonslayer or something.

Paying it Forward

I wound up not having as much time as I thought I would this weekend – so no new content this week.

That frees me up to advertise someone else.

Waaaaaaaay back, when I first decided to start exploring Viking-era ale production, I ran across some archaeological work by a woman named Merryn Dineley. She’s done a lot of work on Neolithic brewing, and her thesis is one hell of a read. This work is a very large part of what inspired me to dive into this research, and I’ve had the pleasure of communicating with Merryn about her work over the past year or so – digging into the nitty-gritty of unearthing ancient brewing techniques.

She and her husband Graham (a craft brewer of many years’ experience) are working on reconstructing a vision of ancient brewing all the way through the Viking age.

Some of you Facebookers may recognize those names – they recently published a poster summarizing their work in researching Viking brew houses. It’s been making the rounds on Twitter and such – I guess that’s what happens when you tell a bunch of archaeologists they’ve been wrong for years!

It’s funny – in my perusal of many archaeological publications, I’ve been largely underwhelmed by the “understanding” of brewing in the archaeological community. It’s pretty clear to me that the vast majority of these researchers aren’t brewers, and they very frequently don’t understand the science behind the process. Many of these papers are riddled with unfounded or erroneous conclusions, and there is insight to be gained with a more complete scientific understanding.

Merryn and Graham know their stuff. Their work is very interesting, and if my blog interests you, check out theirs too.

And if you’re really interested in experimental archaeology, you should check out the Experimental Archaeology Conference.

Brewing With Egil Part I: An Analysis of the Life Cycle of Barley

WARNING: Wall of text ahead

Before I embark on an explanation of the evidence in support of my hypothesis, it occurs to me that I may have a more complete understanding of barley biology than the average person, and very likely the average brewer. Since my hypothesis stands in opposition to some long-held knowledge and handling practices regarding barley and brewing, I figured it might be prudent to start by going over some information about the development of barley, and its interaction with the malting process.

The Australian government has an excellent publication providing a fairly thorough overview of barley biology – primarily from the applied perspective of its role as a cereal crop. You can access it here. The University of Minnesota Agricultural Extension also features a fairly in-depth article.

In summary: dormant barley seeds germinate after soaking up water (a process known as imbibition), and being exposed to the right environmental conditions (temperature, oxygen, and soil pH). The early stages of germination (which we exploit during malting) don’t last terribly long when attempting to grow barley; shoot emergence can occur as rapidly as 72 hours post-imbibition, though exact time varies with variety as well as environmental conditions. Seedling development time (the point at which green leafy material is evident) varies as well, but generally, the seedling emerges from the soil in 10 days to two weeks.

Following emergence, the plant grows and develops multiple stems (tillering), which then begin to elongate. Field barley can have anywhere from 2 to 5 tillers per plant. Not all tillers develop the flowering structure called a “spike” (colloquially called an ear), but this varies with strain. Many modern barleys have been bred to have a high rate of spike development.

The spike is the flowering part of the plant. Once it flowers (releasing barley pollen), the “fruit” of the barley plant – what we know as a “berry” or “seed” – begins to develop.

Barley seeds generally reach full maturity ~25 to 30 days after flowering. During maturation, the grain develops, stores starch, and gradually desiccates. Once the seed no longer yields to fingernail pressure, it is considered ripe for harvesting. Dried barley enters a dormant phase, and when properly stored, dormant seeds can last up to 18 months.

What follows is a relatively complex analysis of the biochemistry of barley development. If you’re interested, read on. If not, skip to the end for my summary.

[HEREIN LIES A BUNCH OF SCIENCE]

The dormant seed is where we start the malting process. The importance of malting barley for the production of beer is widely understood, and most people understand the story in the same way; that is, during malting, we slowly and evenly take the grains through the early stages of germination, to develop enzymes that we will later manipulate in brewing. Those enzymes include proteolytic enzymes, to degrade the protein matrix (called hordeins in barley, and broadly lumped in with the gluten proteins) that contains the starch; alpha- and beta-amylases, which convert stored starch into fermentable sugars; and debranching enzymes, which help “chew” the starch up into chunks that the amylases can more easily handle.

I was under the impression – as are many brewers – that malting is absolutely essential to develop the enzymes necessary to convert the stored starch to sugar. That is, until I learned about barley maturation in more detail.

As it turns out, mature barley seeds contain some completely functional beta-amylase enzyme. The linked paper shows that roughly 40% of the beta-amylase content of resting barley can be recovered with a saline solution. A survey of other literature appears to indicate that alpha-amylase is synthesized during maturation, and is not present in dormant grains.

The remaining 60% of beta-amylase in barley is present in a “bound” form – that is, it is attached to a larger protein inhibitor. Sopanen hypothesizes that the inhibition is likely due to steric hindrance – a phenomenon in chemistry where reactions are slowed because of the actual size and conformation of the molecules involved. In other words, 60% of the beta-amylase in mature barley seeds exhibits attenuated activity because there’s stuff in the way of the active site.

The activity of so-called “bound” beta-amylase was thought to be latent; Sopanen, however, demonstrates that the enzyme can be as much as 70% as active as “free” beta-amylase. It also appears to matter little; the “free” beta-amylase content of ungerminated barley is sufficient to convert the entire starch content of the seed – if the starch molecules are made available to the amylases.
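
To put rough numbers on that (a back-of-the-envelope sketch using the figures quoted above; the 70% figure is Sopanen’s upper estimate for the bound form):

```python
# Back-of-the-envelope: effective beta-amylase activity in resting barley,
# using the figures above (40% free and fully active, 60% bound at up to
# ~70% of free activity).
free_fraction = 0.40
bound_fraction = 0.60
bound_relative_activity = 0.70  # upper estimate for the bound form

effective = free_fraction + bound_fraction * bound_relative_activity
print(f"~{effective:.0%} of total beta-amylase activity available")  # ~82%
```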

Alpha-amylases are also bound with endogenous inhibitors. In this case, the inhibitor reduces the activity of alpha-amylase by nearly 90%. This is likely important to barley maturation; it has been demonstrated that premature alpha-amylase production leads to a reduction in seed size and starch content. This makes sense – alpha-amylase has a greater rate of activity against larger starch molecules than does beta-amylase.

It has been known for some time that gibberellic acid plays a crucial role in barley metabolism. Work by JV Jacobsen (over many years) has led to an in-depth understanding of the role of gibberellic acid in barley; he started by demonstrating that the application of GA induced the production of multiple alpha-amylases, and went on to study the hormone extensively.

So, at first glance, it appears that germination is required for the production of gibberellic acid, which is needed for the production of alpha-amylase. But the barley kernel has sufficient beta-amylase to allow for conversion prior to germination. What’s the deal?

We have learned – thanks to advanced technology – that the maturing barley kernel prepares for germination while on the ear. It does so by switching to a sort of “preparation” mode, wherein it generates thousands of mRNAs (messenger RNAs, generated from genomic DNA and sent to the ribosome for translation into proteins) and stores them. In addition, the barley kernel generates and stores gibberellic acid precursors prior to full maturation.

The full sequence is actually quite complicated. Abscisic acid (ABA, produced during maturation) and gibberellic acid have antagonistic effects – each counteracts the other. This creates the possibility of a biochemical “switch,” where the synthesis of one hormone overtakes the other and changes gene expression. ABA is responsible for inhibiting alpha-amylase production during maturation; the earlier synthesis of GA precursors is what enables the activation of the enzyme upon germination.
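
As a purely illustrative toy model of that switch (my own simplification – real hormone signaling is quantitative and far messier than a single comparison):

```python
# Toy "switch": alpha-amylase expression flips on once GA signaling
# outweighs ABA. Purely illustrative -- not a quantitative model of
# barley hormone biology.
def amylase_genes_on(ga_level, aba_level):
    """True when gibberellic acid signaling dominates abscisic acid."""
    return ga_level > aba_level

# Maturing grain: ABA dominates, so amylase production stays inhibited.
print(amylase_genes_on(ga_level=0.2, aba_level=1.0))  # False
# Imbibed grain: stored GA precursors tip the balance the other way.
print(amylase_genes_on(ga_level=1.0, aba_level=0.3))  # True
```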

In fact, barley kernels generate mRNAs for all sorts of proteins prior to dormancy – the full machinery for the resumption of transcription/translation duties is available in the dormant, un-germinated grain. Desiccation of the grain halts the normal activity of the growing grain – in fact, the data from Sreenivasulu et al. suggest that there is little effective separation between maturation and germination from a biochemical standpoint. The plant makes a smooth transition from one to the other. Desiccation works as a “pause” function, and the plant prepares for this pause by storing mRNA transcripts – along with ribosomal proteins and RNAs – that will allow for the resumption of development upon imbibition.

Most of the proteins required for the early stages of germination – those we need during malting – are not generated de novo from genomic transcription, but rather are synthesized from stored mRNAs and ribosomal machinery. Some proteases are present, but more are produced during germination, along with ubiquitin (a small regulatory protein, found in all eukaryotes, that tags other proteins for processing or degradation).

But the story these data tell is somewhat different than what is commonly understood; rather than germination being critical for the development of these enzymes, it is the existence of those enzymes (and their precursors) in the resting grain that allows germination to proceed at all.

[OK, DONE WITH SCIENCE]

So what does this mean for malting? Mature, un-germinated barley grains contain all the necessary mRNA transcripts, ribosomal machinery, and endogenous enzymes necessary to start and maintain germination. There is enough beta-amylase present in a mature barley grain to convert its entire starch content without further enzyme release. Why do we even need to malt barley in the first place?

It seems that the most critical stages in early germination are the production of gibberellic acid from stored mRNA, and the increased expression of proteolytic enzymes that degrade the protein matrix of the barley kernel. GA is a hormone that, among other things, removes inhibitors from alpha- and beta-amylases. The degradation of the protein matrix allows access to the starch in the kernel, which is converted by the amylases. Debranching enzymes are synthesized from stored mRNAs during this time.

So it seems that some sort of time-centered processing is necessary in order to allow stored biochemical machinery to provide the grounds for the conversion of starch to sugar.

Does this have to be our modern method of malting? I don’t believe so. The presence of beta-amylase in those quantities indicates that the most critical need is the exposure of starch via the degradation of the protein matrix. You could accomplish this in other ways; you could, for example, perform an acid digestion of the barley, and then treat it with enzymes to convert the starches to sugars.

So again, why malt? It’s more efficient from an industrial standpoint – the division of labor means that someone else prepares the raw material for use by the brewer, who can then spend an hour mashing it to get the sugar. Alternate processing streams may affect grain flavor, or increase the total labor used to generate a beverage. Malting is a purposefully slow germination process, to allow for very even development of the grain; this ensures maximum yield from a barley harvest.

However, it doesn’t seem like that could be the only way to do it. There may exist an alternate system that allows for the generation of maltose from barley starch – but I’ll leave that for another time.

Drink up!