El Nino, SAM, and sea ice conditions at Palmer

In 2 and a half months, and unless there’s another government shutdown, I’m heading down to Palmer Station to collect a key set of samples for one of my projects.  The idea is to time this sampling effort with the spring diatom bloom.  This bloom is a critical pulse of fixed carbon into the ecosystem after the dark (sub)polar winter, and is followed by a series of blooms by lesser phytoplankton players – cryptophytes, dinoflagellates, and Phaeocystis.  The problem with this plan is that it is pretty much impossible to guess when the spring bloom will happen.  Ecologically speaking it happens as soon as the ice retreats (or rather, as the ice is retreating), but this can vary by many weeks from one season to the next.  The problem is compounded by the sampling strategy at Palmer Station.  Scientists at the station rely on zodiacs for sampling; those ubiquitous inflatable craft that, while surprisingly durable, are pretty useless in even very light ice conditions.

So one thing we would very much like to know is what the ice conditions will be like in mid October when we arrive on station.  A good place to start a discussion of likely ice conditions is, surprisingly, the tropical Pacific.  The tropical Pacific is a mess right now.  There’s a tremendous amount of heat in the surface ocean and cyclones have been pinging around the South Pacific for the last few weeks like a bunch of bumper cars.  This is the result of a strong El Nino taking shape, possibly the strongest we’ve seen in over a decade.

From www.weatherunderground.org. Sea surface temperature anomaly on August 4, 2015. Warm areas indicate sea surface temperatures that are warmer than normal. Notice the large amount of heat in the Pacific, particularly along the equator extending out from Ecuador. This is the result of suppressed upwelling and will probably, among other things, lead to a very poor anchovy catch in Peru this year… Note:  When I published this post I thought I had dropped this image in as a static image, instead it posted as a link.  The (static) image shown is from a few days after this post was published.

The El Nino Southern Oscillation (ENSO) is a tropical phenomenon with global consequences.  One of these is an influence on sea ice extent along the West Antarctic Peninsula (WAP).  During an El Nino the polar jet (analogous to the northern hemisphere's jet stream) is weakened and there is less transport of heat from the subtropical Pacific to the WAP.  During La Niña the opposite happens; the jet strengthens, driving warm, wet storms south across the Southern Ocean to the WAP.  The strong winds break up the ice, and the heavy snow and rain have a pretty bad effect on Adélie penguin chicks, occasionally causing total breeding failures.

ENSO's not the only pattern of climate variability with an impact on Antarctic sea ice, however.  The Southern Annular Mode (SAM), also called the Antarctic Oscillation (AAO), has a major impact on sea ice extent.  Unlike ENSO, which is the result of complex dynamics between the atmosphere and the ocean, SAM is primarily an atmospheric phenomenon linked to the magnitude of the north/south pressure gradient across the Southern Ocean.  This differential controls the strength of westerly winds that help deliver subtropical heat to the West Antarctic.  SAM has two phases, positive and negative, and can hold one phase for weeks or months, then suddenly shift.  During its negative phase SAM is correlated with reduced westerly winds and increased sea ice along the WAP (and happy penguins).

Right now SAM is in a positive phase, and has been for some time.  But we've also got an uber El Nino.  So what does that mean?  I asked Sharon Stammerjohn, a physical oceanographer with the Palmer LTER project, what happens in these situations.  Sharon and several colleagues wrote a paper in 2008 exploring the impact of SAM and ENSO on Antarctic sea ice extent.  It's pretty clear what happens if a La Niña lines up with a positive SAM (low ice year), or an El Nino coincides with a negative SAM (high ice year).  But what about a positive SAM and a strong El Nino?  In that case it can, apparently, go either way.  Either the strong subtropical storm effect will overcome the weakened polar jet or it won't.  To get a sense of which is winning so far this year we can take a look at the NSIDC's current map of Antarctic sea ice extent.

Taken from http://nsidc.org/data/seaice_index/. Antarctic sea ice extent for July 2015. Right now it’s shaping up to be a big ice year for the West Antarctic Peninsula.

Ouch, take a look at the northern tip of the WAP.  The pink line is, as the figure indicates, the median sea ice edge.  Clearly El Nino is winning; most of Antarctica is normalish, but the WAP region has some extraordinary ice cover right now.  Sea ice has been more or less on the decline there for the last couple of decades, so it will be very interesting to see what kind of impact this has on the ice-dependent WAP ecosystem.  Of course sea ice that far north is pretty fickle; the SAM could switch modes, or the westerlies could increase, and the sea ice could break up and move out before spring.  Otherwise it looks like we might be sampling from the Palmer dock…


BMSIS Undergraduate Essay Contest

Reposted from www.bmsis.org, please share widely!

Mars is on the horizon for future space explorers, with national space agencies as well as private corporations making plans to send humans to the red planet in the coming decades. Meanwhile, remote exploration of Mercury, Venus, and the asteroid belt continues to escalate, which provides insight into the history and resources of the inner solar system. Our planetary neighbors are closer to our reach than ever before, but the long-term exploration of space remains expensive, risky, and uncertain.

The Blue Marble Space Institute of Science invites undergraduate students to address this theme by responding to the question: How can human civilization develop a successful strategy for the sustained exploration of the inner solar system for the next 200 years? The purpose of the essay contest is to stimulate creative thinking that explores how human space exploration will affect humanity’s future.

We invite essays from undergraduate students between the ages of 18 and 30. To be eligible, each applicant must have been enrolled in a degree program at a qualified educational institution (2-year or 4-year college/university) in at least one term during the 2015 calendar year. Applicants should limit their essays to 1500 words or less. The deadline for essay submission is 30 September 2015 at 5:00 PM US Pacific time. Essays will be assessed based on scientific accuracy, originality, and writing style.

The author of the winning essay will receive a $500 prize and will be invited to present his or her ideas in an episode of the “BlueSciCon” podcast series as well as lead a discussion on SAGANet.org. The winning essay will also be considered for publication in a scientific journal or magazine with a commentary by an acclaimed journalist and author. Two honorable mention prizes of $200 each will also be awarded.

Essays will be judged by an expert panel of BMSIS research scientists as well as a group of outside judges of esteemed scientists and writers. Essay winners will be announced on 17 December 2015.

Submit your essay to: essaycontest@bmsis.org


How do common research scales match up across ocean science disciplines?

Apparently not very well.  A couple weeks ago I came across a really cool paper by some current and former students at the University of Washington School of Oceanography and the UW School of Aquatic and Fishery Sciences.  The authors are/were participants in the NSF IGERT*-funded Program on Ocean Change (IPOC).  The paper, Disciplinary reporting affects the interpretation of climate change impacts in global oceans, was published in Global Change Biology, a high-impact journal, and no mean feat for a grad-student-only publication.  I have some sense of what goes into that sort of project; I was involved in two grad-student-led review studies at the UW.  The first, thanks largely to the perseverance of lead author Eva Stueeken, eventually saw the light of day after two years and one rejection.  The second was abandoned after one year, when we actually had a draft in hand and realized that it didn't go anywhere (or maybe it tried to go everywhere).

In this paper lead author Donna Hauser and colleagues address the question of whether the spatial and temporal scales commonly studied by the different ocean science disciplines are comparable.  They quantified the temporal and spatial extent of 461 top papers split among three biological domains (benthic communities, vertebrates, plankton communities) and a physical/chemical oceanography domain.  That in and of itself reflects some bias, but the authors are biologists (and may therefore be forgiven for the more detailed look at biology), and there are reasons to suspect that biology, as a sub-discipline of the ocean sciences, functions a bit differently from chemical and physical oceanography.

Taken from Hauser et al., 2015. The spatial scale of 344 out of 461 top recent papers across the ocean sciences that reported a spatial extent.

Not surprisingly the authors found that there are big differences in the spatial and temporal scales favored by these domains.  The above figure captures the spatial differences.  Very few of the biology studies selected by the authors were global in scale, though there were many that were multiregional.  There were also strong regional differences between domains.  Surprisingly there were no benthic studies in the Arctic (with its broad continental shelf, the Arctic is known to host a rich benthic ecosystem – hence the walruses and other large bottom feeders), and no physical or chemical studies in the Mediterranean.  Plankton studies were lacking in the Indian Ocean (again surprising; there are huge planktonic ecosystem shifts happening in the Arabian Sea).  A similar pattern is observed across temporal scales, with physical and chemical studies tending to address longer time periods:

From Hauser et al., 2015. The temporal and spatial scale addressed by highly cited ocean change studies from different domains.

So why the difference in temporal and spatial scales across the different domains, and is it a problem?  It isn't that hard to imagine why the spatial scales differ.  Hauser et al. spend some time addressing motivations for undertaking studies at different spatial and temporal scales, but I think a big difference between sub-disciplines is simply in how each collects data.  The bread and butter of physical oceanography, for example, is temperature and salinity measurements.  These measurements are collected en masse by hydrography cruises and by autonomous sampling platforms (e.g. floats and gliders).  The methodological challenges lie in data analysis and interpretation.  It's a bit more complicated for chemical oceanography, and the study's authors didn't break that sub-discipline into specific domains, but data acquisition is also high volume for chemical oceanographers studying things like pH, dissolved oxygen concentrations, nutrients, and anthropogenic tracers.

Difficult? Yes. Expertly executed? Yes. Necessary? Yes. High throughput? Not a chance. Researchers deploy a zooplankton net from the ARSV Laurence M. Gould off the West Antarctic Peninsula.

For many biological oceanographers data acquisition remains the bottleneck.  The use of satellites to identify ocean color and, more recently, underway flow cytometry to quantify phytoplankton are important advances toward high-throughput sampling, but they remain the exception (and even these advances have major limitations).  Most biologists working in the ocean still 1) haul nets to laboriously count zooplankton and conduct experiments, 2) filter volumes of water for the exceedingly tedious job of extracting and analyzing RNA, DNA, and protein, or 3) actually jump in the water to count things.  None of these approaches are going to get you very many datapoints, even after a lifetime of research.  Thus spatial scales are inherently limited.  It's a little less obvious why biological studies should be temporally limited.

Temporal (top) and spatial (bottom) extent of included studies for different topical areas.

The figure above breaks things down even further into topical areas and suggests to me that the temporal constraint, as for spatial scales, is methodological.  The techniques commonly applied to the study of archaeoplankton and bacterioplankton, for which a cutting-edge study might evaluate change across just a season, have really only been around since 2005 (for a very cool exception, or rather work-around, check out this paper).  And the techniques have evolved so rapidly since then that the 2005 stuff is hardly comparable to what's sampled today.  This is compounded by some institutional laziness among biologists regarding record keeping, and inertia regarding the adoption of standards and high-quality databases.  The situation gets better as you move up the food chain toward topics that rely on more classic, records-based ecology.

So is this a problem?  Well, the authors probably wouldn't have gotten published in Global Change Biology if it wasn't.  There are (at least) two reasons this is a problem.  The first, which is really the thrust of the paper, is that you can't integrate across disciplines and develop a comprehensive understanding of ocean change if everyone is studying things at different scales.  This isn't just a scolding to biologists to upscale their studies spatially and temporally.  As the last line of the abstract notes, there is a need to measure biological responses at biologically relevant timescales.  Often the biologically relevant scale is just different from the scale that attracts expertise and funding to physical and chemical oceanography.  Biologists do need to break through their malaise and bottlenecks and generate analyses at broader temporal and spatial scales, but the funding agencies might also need to think creatively about motivating collaborative work at the most relevant scales.  That scale might not be global, and it might not be decadal, even if that's the surest way to get papers published in Science and Nature.

The second reason this is a problem is a subset of the first, and has to do with the lack of historical records in biological oceanography, something that we aren't in any hurry to fix.  We have the technology to conduct spatial and temporal analyses of microbial community structure and function at a moderate resolution, for example, but very few people are trying to incorporate those analyses into existing continuous ocean sampling programs.  At some point that's going to have to change.

*In classic fashion, and following the GK-12 program, IGERT became popular and successful, and so was cut by NSF.  Funding will continue for existing IGERT programs until the end of their funding period, but there will not be new calls for IGERT programs.  As a graduate student I benefited from 3 years of IGERT funding through the UW Astrobiology Program, which not only paid my stipend but provided some small funds for research and travel.  IGERT was a unique and highly effective program (as was GK-12, and EPA STAR, and…) and I'm sorry to see it go.


US science funding since 1963

One way to get some great data on US science funding is to sign up for the National Science Foundation's email notifications.  In addition to announcements on funding opportunities you get notifications on the publication of a wide range of funding reports.  I received one this morning titled "Federal Science and Engineering Support to Universities, Colleges, and Nonprofit Institutions: FY2013".  It caught my eye because it ties in with this recent post on the balance between public and private R&D funding.

The best thing about this report is not that it provides R&D spending (as allocated to entities that are not federal agencies) for 2013, but that it provides a breakdown for all years since 1963.  It's a great opportunity to get a little historical perspective on federal science spending.  Here's a plot of federal R&D dollars since 1963.  Federal science spending has generally increased since 1963, reaching a maximum in 2009 (the "stimulus" year).

Federal R&D spending since 1963.

Of course the dollar has been decreasing in value since 1963 due to inflation.  What does the curve look like if this is taken into account?  To say that I'm no economist is a profound understatement, but I found a handy calculator for estimating the buying power of the dollar for a given year on the website of the Bureau of Labor Statistics.  In 1963, for example, the dollar was worth $7.61 in 2013 dollars.  A third-degree polynomial fits the values for 1963, 1973, 1983, 1993, 2003, and 2013 pretty well (yes it would have been entirely realistic to look up the values for each year, but why do that when you can fit a curve?):

Dollars in 2013 dollars.

It looks like inflation was pretty steep in the 1960s and 70s and has really slowed since 2000.  I listen to Marketplace more days than not, so I feel like I should have a better understanding of why that is…
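If you want to reproduce that curve fit, a minimal sketch with numpy is below.  Only the 1963 conversion factor ($7.61) comes from the text above; the factors for the other decades are approximate placeholders that you should look up on the BLS calculator yourself.

import numpy as np

## conversion factors to 2013 dollars; 7.61 (for 1963) is the value quoted above,
## the rest are approximate placeholders from the BLS inflation calculator
years = np.array([1963, 1973, 1983, 1993, 2003, 2013])
factors = np.array([7.61, 5.25, 2.34, 1.61, 1.27, 1.00])

## third-degree polynomial fit, as described above
fit = np.poly1d(np.polyfit(years, factors, 3))

## the fitted curve can now estimate the conversion factor for any year
for year in [1963, 1970, 1980, 1990, 2000, 2009, 2013]:
    print('%i: 1 dollar is worth about %.2f 2013 dollars' % (year, fit(year)))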

Taking the purchasing power of the dollar into account we get a slightly different view of federal R&D spending since 1963.

Federal R&D spending since 1963, in 2013 dollars.

This surprised me a bit.  It suggests that in 1963, at the start of the space race, the federal government was investing roughly 10 billion (2013) dollars per year in R&D.  There were some downs, but overall that amount increased dramatically to just over 40 billion dollars in 2009, followed by a precipitous decrease to somewhere in the neighborhood of 34 billion dollars.  What this plot doesn't show, however, is the breakdown by agency.  I didn't make a plot showing this, but let's consider allocations by Health and Human Services (almost all through NIH) and the National Science Foundation (NSF).  In 1963 NSF allocated 256.3 million dollars to colleges, universities, institutes, etc.  That's 1.95 billion in 2013 dollars.  In 2013 it allocated 4.91 billion, a 2.51-fold increase.  Health and Human Services (i.e. NIH), by contrast, allocated 4.42 billion in 2013 dollars in 1963.  In 2013 it allocated 16.00 billion dollars, a 3.61-fold increase.
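Those fold increases are just a couple of lines of arithmetic; here is the calculation spelled out, using only the numbers quoted in the paragraph above.

factor_1963 = 7.61                    ## 1963 -> 2013 dollars, from the BLS calculator

nsf_1963 = 256.3e6 * factor_1963      ## NSF's 1963 allocation in 2013 dollars
nsf_2013 = 4.91e9
print('NSF: %.2f billion 2013 dollars in 1963, a %.1f-fold increase by 2013' % (nsf_1963 / 1e9, nsf_2013 / nsf_1963))

hhs_1963 = 4.42e9                     ## HHS's 1963 allocation, already in 2013 dollars
hhs_2013 = 16.00e9
print('HHS: a %.1f-fold increase by 2013' % (hhs_2013 / hhs_1963))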

Depending on who you are, that may or may not have any emotional impact.  First, it would seem that science funding is more or less increasing, except for the last few years.  Second, the increase in basic research (NSF allocations) might be lagging behind the increase in medical research (Health and Human Services), but it’s still increasing.

Taking the first point first: I wanted to consider the increases and decreases in the context of GDP which, despite turbulent times in every decade since the 60's (the oil crisis, the stock market crash, the dot-com bubble, 9/11, the great recession – is that a thing yet? – to name a few), has increased in every year except 2009, and even that hiccup was followed by a rapid recovery.  Considering federal R&D spending as a percent of GDP reveals a pretty surprising trend…

Federal R&D spending as a percent of GDP. Just for fun I’ve colored the timeline according to what party had control of what. Top colors are the White House, middle is the House of Reps, bottom is the Senate. Draw your own conclusions.

…there really is no trend!  Things have wobbled around a mean value of 0.20 %.  We're actually down quite a bit since 1963, somewhere around 0.17 or 0.18 % of GDP.  What's the take away?  From personal experience I know what 2009 felt like, when the nation invested 0.25 % of GDP on public sector R&D, and what 0.17 % felt like (read: feels like).  That 0.08 % makes a really big difference in scientific advancement, jobs, education and quality of life.  I think 0.25 % is a good goal to strive for.

To return very briefly to the second point raised earlier, the 3.61-fold increase in allocations by Health and Human Services: in no way do I disparage that increase in funding for my colleagues under the NIH funding umbrella.  They work extremely hard for limited funds, and are making huge strides in solving tough and essential medical problems.  I think it's a mistake, however, to assume that we can achieve a strong and healthy society primarily through investment in the kind of research that NIH is focused on.  From a societal perspective NIH is focused on treating the symptoms of society's ills: groundwater contamination or UV exposure gives you cancer, and NIH-funded research provides you a medical solution (hopefully).  The other federal agencies allocate funds to research that addresses the core causes, somewhere far upstream of NIH.  This is much less sexy, and much less appealing to Congress, but arguably more important.  With the ozone layer recovered, hydrology better understood, and better alternatives to some nasty chemicals, for example, perhaps fewer people would need what NIH can provide.  Again, that's no argument against medical research, just a request that we acknowledge that it isn't the whole solution.  Maybe we can achieve that 3.61-fold increase for all R&D spending.


Making friends in the DOC pool

I’m going to keep draining the pool analogy until it’s dry…

I’m going to miss the biological oceanography journal club at LDEO tomorrow in order to attend the Sequencing the Urban Genome symposium at the NY Academy of Sciences.  It’s a step away from my primary field, but I’ve been to some human/built environment microbiome symposia before and found them to be fascinating.  All microbial ecology gets at the same fundamental set of questions, even as the environment being studied changes dramatically, so it is often useful to see how a different group of researchers are addressing the same problems that we are.  I was really looking forward to our paper discussion (to be led by Kyle Frischkorn), however, so to make up for it I decided to do a quick write up.

This week's paper comes out of Ginger Armbrust's lab at the University of Washington.  I was fortunate to know lead author Shady Amin while he was a postdoc with Ginger; he's now an assistant professor at NYU's Abu Dhabi campus.  The paper, Interaction and signalling between a cosmopolitan phytoplankton and associated bacteria, is another great example of what can happen when marine chemists team up with microbial geneticists, something that's been happening a lot lately.  In this case they were able to uncover a two-way communication pathway between a marine diatom and a bacterial affiliate, based on a well-studied mechanism of communication between plants and bacteria in the rhizosphere.  We've known about plant-bacteria communication for a long time, but it was not obvious that a method of communication that works in the solid soil matrix (containing upwards of a billion bacteria per gram) would work in seawater (being liquid and containing only a hundred thousand to a million bacteria per ml).  So we're several decades behind the plant people in understanding the directed influence marine bacteria can have on phytoplankton.  But moving fast.

Taken from Amin et al. 2015.

As shown in the figure above, the specific mechanism of communication reported by Amin et al. is based on the plant hormone indole-3-acetic acid (IAA).  The bacterium, a Sulfitobacter, produces IAA, which kicks the diatom into overdrive, upregulating genes involved in cell division and carbon fixation.  In turn the diatom releases the IAA precursor tryptophan, taurine, and DMSP, the latter two compounds serving as a carbon source for the bacterium.

What is really remarkable about this is that the relationship is highly specific.  The diatom was tested with many different bacteria isolated from diatom cultures, and the bacterium was grown in the presence of many different phytoplankton isolates.  Only for this specific pairing, however, did the diatom and the bacterium each have a positive influence on the growth of the other.

To take the study to the next level Amin et al. measured IAA abundance in surface seawater in the North Pacific and looked for evidence of IAA biosynthesis pathways in several public metatranscriptomes.  Not surprisingly they found plenty of both: measurable levels of IAA in all samples and what seems to me to be a reasonably high number of transcripts associated with IAA pathways (not sure what 10^7 transcripts L^-1 actually means though…).

This was a really cool study, but I'm having a hard time wrapping my head around the implications.  The specificity implied in the experiments contrasts sharply with the abundance of IAA and IAA biosynthesis pathways in the natural environment.  And if the communication pathway provides a major benefit to the bacterium and/or the diatom, why is it rare among laboratory isolates?  What is the cost to both the bacterium and diatom?  For example, diatom respiration is suppressed by IAA; is maintenance being deferred in favor of growth?


Science (de)funding

Note: I had intended this post to be a quick discussion of how science funding supports jobs, and why we need more science funding at the federal level.  As you can see below, I never quite got there, but I hope this entry has a couple of points of interest.

####

I don’t write about politics very often on this blog.  I follow the news best I can but I’m not what you’d call an astute political observer.  Given the large number of people who are, or who at least pretend to be, I usually feel there isn’t that much that I can add to the conversation.  I think that this is a sentiment that is shared by a lot of my peers.  Like most scientists I prefer to keep politics at arm’s length (maybe at an arm + leg’s length) and spend my time and energy doing science.

This is, of course, totally unrealistic.  Although there is a growing philanthropic contribution to science funding – a much welcome contribution that, nonetheless, raises its own set of issues – the bulk of funding for basic research in the United States comes from the federal government, and that's become a real problem.  Looking for data on this I found some interesting numbers on R&D spending as a fraction of GDP compiled by the World Bank.  The US is not the largest spender, falling behind S. Korea, Israel, and much of northern Europe, but it re-invests a substantial proportion (2.7 %) of its GDP in R&D, an amount consistent with most of the developed world.

R&D spending as a percent of GDP for the top 15 nations for the period 2005-2012. Data taken from http://wdi.worldbank.org/table/5.13#.

The problem, however, is that the vast majority of this funding is being committed by and spent in the private sector.  Nothing wrong with private sector spending on R&D, of course, but the private sector has a very different mission and motivation than the public sector.  Its goal is to maximize profit in the near term by 1) developing new products, 2) lowering the cost to produce existing products, and 3) expanding the market for new and existing products.  Research that supports these objectives is considered applied research.  By contrast federal funding is largely responsible for sustaining basic research.  Basic research is rarely undirected – the peer review process in place at federal agencies is, if anything, overly conservative on this – but it is often undertaken without a specific commercial application in mind (though there are plenty of exceptions, the search for novel antibiotic compounds being one that comes immediately to mind).

I think that most industry leaders recognize that basic research is a critical feedstock of industry (I suspect that all devices, algorithms, and processes have some ancestry in basic research).  There may, however, be a growing sense that even applied research does not belong entirely in the private sector, or at least that the companies best known for a high level of R&D spending have not always been the top performers** (whatever happened to the Bell telephone company anyway?).  I tend to agree with this idea.  I like to think of public sector science as a common well, where all companies, innovators, and entrepreneurs can come for new ideas and solutions.  The well is sustained by the federal government with the money that everyone pays into the system.

So how does the depth of the public well stack up against private sector R&D spending?  The 2014 US GDP was 16.77 trillion dollars; 2.7 % of that amount is 452 billion dollars.  The chart below shows federal research spending by government agency; tally it all up and you get 68.1 billion dollars, or roughly 15 % of total US R&D spending.  Note that not all of this funding can be described as basic research; in fact, the only agencies with a real mandate for basic research are NASA and NSF.  More on that momentarily.

Before we get to the point I want to quickly address the little blip in funding for 2009.  That was my second year in grad school and I remember all the hubbub about stimulus spending to “jump start” the economy (sort of like wiring a D-cell battery to your dead car and hoping for the best) – lots of people got grants funded that year.  This increase in funding produced some good science, but in the long run I fear that it's done as much damage as it did good.  The temporary increase in funding provided for many graduate students, postdoctoral researchers, and staff members.  Because funding levels were not sustained, however, these positions could not be maintained.  What happened to all these people is a great question that nobody seems particularly keen to explore.  It seems unlikely that they were absorbed into permanent jobs in academia; hiring was low during the recession and has been tepid during the recovery.  Many of these workers probably moved on to the private sector, but it also seems like there are a lot more “soft money”* scientists these days.

US federal science funding by agency. Taken from The 2014 Budget: A World Leading Commitment to Science and Research.

To finally get to the point: recent weeks have seen a disturbing escalation in the rhetoric against basic science in the House of Representatives.  At the core is the re-authorization of the America COMPETES Act, elements of which will pretty much ensure that America does anything but.  There are a few troubling things about this bill, which have been widely discussed in other blogs and the mainstream media, including an under-the-table redistribution of funds between NSF directorates and sly spending level freezes.  The goal was apparently to enact a deep cut to the Geosciences Directorate (one of the seven major organizational units within NSF) without making it obvious.  The bill was accompanied by some rather silly statements by John Culberson (R-Tx), chair of the House Commerce, Justice, and Science subcommittee, regarding the role of the Geosciences Directorate within NSF, and Senator Ted Cruz (also R-Tx, imagine that!) on whether the geosciences are a “hard” science.  It isn't hard to imagine why Republicans want to take aim at the Geosciences Directorate, which funds a lot of climate science work (they are also taking aim at Earth observing programs within NASA).  It's enough to make one wish for the Bush years, when science funding saw steady, albeit slow, growth, and funding for climate-critical projects like the Orbiting Carbon Observatory.  The comments by Ted Cruz, however, are baffling, as the geosciences are among the most quantitatively rigorous disciplines of modern science.  I can only hope that enough petroleum geologists and engineers back in Texas were offended by the remark to earn Ted a jeer or two in his home state.

On a practical level I suspect that most Republican members of Congress, like their Democratic counterparts, have no idea what they are talking about when they speak about science, the scientific community, and science funding.  I've had the pleasure of having some conversations with Democratic science staffers in both houses and I was shocked at how little they knew about what science funding is actually used for.  This point was brought home in a recent editorial in Science, where NSF director France Córdova described the surprise expressed by a Congressional delegation upon learning that the entire Antarctic Program is funded through the Geosciences Directorate.  It is, sadly, the prerogative of the Congressional leadership to execute a personal vendetta against the climate science community, but we can expect a lot of collateral damage, including to the industry partners of these same Republicans.

In addition to the Antarctic and Arctic Programs, a quick look at the Directorate of Geosciences website reveals programs for:

Tectonics!  Probably best not to defund earthquake research…

Hydrology!  I keep hearing stuff about a drought or something, but it’s probably nothing…

Biology!  Back in the day biology was considered evil.  But now it has John Culberson’s full support.  Hey geosciences, thanks for getting us off the s–t list!

Fellowships!  When the Republicans talk about jobs, they don’t mean jobs for those people.  If industry wants highly skilled geologists and informaticians they can find them in some other country…

This wandered pretty far from where I started, so the take home points are:

1.  In the US, industry spends a lot of money on R&D.  It would be better for everyone, and more economical, if the federal government took on more of this responsibility.

2.  The federal government allocates a very small portion of the federal budget to NSF and NASA, the two agencies that conduct the most basic research.

3.  These same two agencies are being raked over the coals for producing high quality science that the congressional leadership objects to.

4.  You can do something about this.  Please let your Senator know you object to the House cuts for science, and that the Senate should not follow suit.  No one will read your email, but the volume of email traffic matters for something.  The American Geophysical Union, the largest geosciences professional organization in the US, has an easy web interface for emailing your senators.  They'll even provide some boilerplate text; I added the following to my message:

As a working research biologist I'm deeply troubled by the attacks against my colleagues working in Earth and space sciences, of which this budget issue is only the most recent iteration.  From previous conversations with staffers in both houses I feel there is an underappreciation of how federal science spending supports higher education and jobs across the United States.  University faculty, and faculty at non-profit research centers, are typically dependent on federal grants for 25 to 50 % of their salary.  Lab managers, technicians, administrative assistants, graduate students, and researchers not on a tenure track are typically entirely dependent on federal grants (i.e. they receive no institutional support).  The proposed funding cuts are an academic job-killing program that the country can ill afford – many of these positions are essential training for future academics and future industry researchers.  I encourage you to not only oppose this measure, but to adopt language that accurately reflects the possible ramifications.

* There are two types of science positions in academia.  Hard money positions, such as a typical university professorship, come with at least partial institutional support, usually for 6 or 9 months of salary each year.  The researcher is expected to raise the rest of their salary, and all their research expenses, from grants.  A researcher in a soft money position has a lab and position at an institution, but has no (or little) salary support.  This is great for universities, which pay almost nothing for soft money positions (facilities costs are offset by grant overhead) but benefit equally from the products of soft money and hard money research.

** I’m not supporting the assertion this article makes, which even at a quick glance is based on shoddy methods, but the idea might have merit.


Predicting metabolic pathways in a metagenome

For an ongoing project I needed to predict the metabolic pathways that are present in a metagenome.  This is actually something that I’ve been interested in trying to do for a while, as metabolic pathways can tell us much more about metabolic function than genes alone.  There may be some pre-packaged methods out there for doing this, but I’ve struggled over the years to find one that doesn’t require commercial software or a subscription to a commercial database (i.e. KEGG).

I've worked out the following method that relies on DIAMOND, Biopython, and Pathway-Tools.  All are available for free for academic users.  I went down this path at the request of a reviewer for a submitted manuscript; if you use the method please cite the (hopefully) forthcoming work – Microbial communities can be described by metabolic structure: A general framework and application to a seasonally variable, depth-stratified microbial community from the coastal West Antarctic Peninsula.  I haven't archived the work on a preprint server (I'm still not sure how I feel about those), but I'm happy to provide a copy of the manuscript and/or the citation.  I'll try to remember to add the citation here when (if!) the revised manuscript is accepted (Accepted! see link in this article).

The general strategy here is to:

1.  Shake your fist at the hard, cruel world that forced KEGG to go commercial.

2.  Create a DIAMOND database of all the coding sequences present in all completed Genbank genomes.

3.  Use DIAMOND blastx to search all the metagenomic reads against this database.

4.  Create an artificial Genbank format sequence record containing all the features that had a hit.  To avoid time and memory constraints this record will contain only unique BRENDA EC identifiers and gene products.

5.  Use Pathway-Tools to predict the metabolic pathways in this artificial “genome”.

A word of warning… I hacked this together in a non-linear fashion, thus the code presented here is NOT exactly what I ran.  There might be some errors, so if you try this and it doesn’t run just take a deep breath and track down the errors.  Feel free to ping me if you can’t find the problem.  Okay, here we go!

Download the complete genomes from Genbank and go get a fresh cup of coffee:

wget -r -A "*.gbk" -nc ftp://ftp.ncbi.nlm.nih.gov/genbank/genomes/Bacteria/

Next, extract all the feature translations from these Genbank files (yes, you could just download the faa files, but I had reasons for needing to do it this way).  Pay attention to directory locations (here and in the other scripts)!

### user setable variables ###

ref_dir = '/volumes/hd1/ref_genome_database/ftp.ncbi.nlm.nih.gov/genbank/genomes/Bacteria/' # location of the database directory

### end user setable variables ###

import os  
from Bio import SeqIO  

## write every CDS translation to a single fasta file, and record which genome
## directory each sequence came from in a separate map file
with open('all_genomes.fasta', 'w') as fasta_out, open('all_genomes_map.txt', 'w') as map_out:    
    for d in os.listdir(ref_dir):    
        for f in os.listdir(ref_dir + d):
            if f.endswith('.gbk'):                
                gbk_name = f
                
                for record in SeqIO.parse(ref_dir + d + '/' + gbk_name, 'genbank'):
                    parent = record.seq
                    
                    for feature in record.features:
                        if feature.type == 'CDS':
                        
                            start = int(feature.location.start)
                            end = int(feature.location.end)
                            strand = feature.location.strand
                            try:
                                trans = feature.qualifiers['translation'][0]    
                                feature_id = (start, end, strand)
                                
                                if strand == -1:
                                    sequence = parent[start:end].reverse_complement()
                                else:
                                    sequence = parent[start:end]
                                
                                seqname = record.id + '_' + str(start) + '_' + str(end) + '_' + str(strand)
                                print start, end, sequence[0:10]
                                print >> fasta_out, '>' + seqname + '\n' + trans
                                print >> map_out, d, f, seqname
    
                            except KeyError:
                                continue

Make a DIAMOND database:

diamond makedb --in all_genomes.fasta -d all_genomes_diamond

And run DIAMOND blastx (so fast!).  This took me about an hour with 24 cpus and 32 Gb RAM for a metagenome with 22 million reads…

diamond blastx -d all_genomes_diamond -q test_mg.good.fasta -o test_mg.good.diamond.txt -k 1

Now create the Genbank file containing all features that have a hit.

ref_dir = 'ftp.ncbi.nlm.nih.gov/genbank/genomes/Bacteria/'

import os
from Bio import SeqIO
from Bio.SeqRecord import SeqRecord
from Bio.Seq import Seq
from Bio.Alphabet import IUPAC

map_dict = {}

with open('all_genomes_map.txt', 'r') as map_in:
    for line in map_in:
        line = line.rstrip()
        line = line.split()
        map_dict[line[2]] = line[0]

hits = set()
genomes = set()

with open('test_mg.good.diamond.txt', 'r') as diamond:
    for line in diamond:
        line = line.rstrip()
        line = line.split()
        evalue = float(line[10])
        hit = line[1]
        if evalue < 1e-5:
            hits.add(hit)
            print 'hit =', hit
            genome = map_dict[hit]
            genomes.add(genome)

## the input directory for Pathway-Tools needs to exist before we write
## genetic-elements.dat, organism-params.dat, and the annotation file into it
if not os.path.exists('all_genomes_nr'):
    os.mkdir('all_genomes_nr')

with open('all_genomes_nr/genetic-elements.dat', 'w') as all_genetic_elements:
    print >> all_genetic_elements, 'ID' + '\t' + 'all_genomes_nr'
    print >> all_genetic_elements, 'NAME' + '\t' + 'all_genomes_nr'
    print >> all_genetic_elements, 'TYPE' + '\t' + ':CHRSM'
    print >> all_genetic_elements, 'CIRCULAR?' + '\t' + 'N'
    print >> all_genetic_elements, 'ANNOT-FILE' + '\t' + 'all_genomes_nr.gbk'
    print >> all_genetic_elements, '//'  
    
with open('all_genomes_nr/organism-params.dat', 'w') as all_organism_params:
    print >> all_organism_params, 'ID' + '\t' + 'all_genomes'
    print >> all_organism_params, 'Storage' + '\t' + 'File'
    print >> all_organism_params, 'Name' + '\t' + 'all_genomes'
    print >> all_organism_params, 'Rank' + '\t' + 'Strain'
    print >> all_organism_params, 'Domain' + '\t' + 'TAX-2'
    print >> all_organism_params, 'Create?' + '\t' + 't'

good_features = []
ec = set()
products = set()
    
for d in os.listdir(ref_dir):
    if d in genomes:
        for f in os.listdir(ref_dir + d):        
            for record in SeqIO.parse(ref_dir + d + '/' + f, 'genbank'):            
                for feature in record.features:
                    keep = False
                    if feature.type == 'CDS':
                        try:
                            temp_ec = feature.qualifiers['EC_number']
                            for each in temp_ec:
                                if each not in ec:
                                    keep = True
                                    ec.add(each)
                                    print feature.type, each
                            if keep == True:
                                good_features.append(feature)
                        except KeyError:
                            try:
                                temp_product = feature.qualifiers['product'][0]
                                if temp_product not in products:
                                    good_features.append(feature)
                                    products.add(temp_product)
                                    print feature.type, temp_product
                            except KeyError:
                                continue
            

## the 'nnnn' sequence is just a placeholder; the pathway prediction relies on the
## feature annotations (EC numbers and products), not on the sequence itself
new_record = SeqRecord(Seq('nnnn', alphabet = IUPAC.ambiguous_dna), id = 'all_genomes_nr', name = 'all_genomes_nr', features = good_features)
SeqIO.write(new_record, open('all_genomes_nr/all_genomes_nr.gbk', 'w'), 'genbank')

Once you've waded your way through that mess you can run the PathoLogic component of Pathway-Tools from the command line like so:

pathway-tools -lisp -patho all_genomes_nr/ -disable-metadata-saving &> pathos_all_genomes_nr.log

That will take a little while to run.  When it's complete you can navigate to where Pathway-Tools stores your user-generated pathways (probably something like home/you/ptools-local/pgdbs/user).  The pathways will be in the all_genomes_nycyc directory.  The best way to get at that information is to parse the friendly pathway-reports.txt file.

Update 6/5/15 – I was stuck in file parsing mode when I wrote that last paragraph.  Clearly the best way to “get at that information” is to fire up Pathway-Tools, which will let you view all the pathways by hierarchy, and their associated graphics.


Trick question: Is the oligotrophic ocean net autotrophic or heterotrophic?

Answer: Both!

For this week's LDEO biological oceanography journal club we discussed a very interesting paper recently published in Nature Communications by Pablo Serret et al., titled Both respiration and photosynthesis determine the scaling of plankton metabolism in the oligotrophic ocean.  The paper was suggested by graduate student Hyewon Kim who did an excellent job leading the discussion.

Given the scale of the global ocean, whether it serves as an overall source or sink of CO2 is a big question, with major implications for the flux of CO2 to the atmosphere.  The primary processes controlling this are photosynthesis (autotrophy), a sink for CO2, and respiration (heterotrophy), a source of CO2.  The difference between these (photosynthesis – respiration) is called net community production (NCP).  When NCP is positive, CO2 is fluxing into the surface ocean; when negative, CO2 is fluxing out.  The dynamics that determine exactly how much photosynthesis and respiration is going on are extremely complex and our understanding of both is definitely incomplete.  Clearly light, temperature, and nutrients have something to do with the efficiency of these processes, but so does community composition (controlled in turn by light, nutrients, water column structure), grazing pressure, and a host of other variables.  Predicting the sign of NCP mechanistically, although highly desirable, is beyond our current abilities.  In place of this we rely on estimates of NCP from direct measurements of photosynthesis and respiration.

It is reasonable to think that direct measurements should be better than a mechanistic model anyway; however, without a model there is no way to confirm that the measurements are actually right.  This is particularly important for NCP estimates, as the measurements of both photosynthesis and respiration are all over the board and are known to be fundamentally flawed, as shown in this figure from Williams et al., 2013.

As categorized in the above figure, measurements of NCP come in two flavors, in vitro and in situ.  In vitro measurements are typically bottle incubations, conducted on the deck of a ship in a water bath (to maintain in situ temperature) under both light and dark conditions.  The amount of oxygen consumed in the dark provides an estimate of nighttime respiration.  The amount of oxygen produced in the light (slightly shaded to represent in situ light conditions) provides an estimate of daytime NCP.  Subtracting nighttime respiration from daytime NCP gives an estimate of NCP over an entire 24 hour period.  There is a lot, however, that can go wrong with this.
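To make the bookkeeping concrete, here's a minimal sketch of that light/dark bottle arithmetic.  The oxygen values are made up purely for illustration; the only point is how a daily NCP estimate is assembled from the two incubations.

## hypothetical bottle incubation results, in mmol O2 m-3
dark_o2_consumed = 1.2     ## O2 drawdown in the dark bottle, an estimate of nighttime respiration
light_o2_produced = 1.8    ## net O2 accumulation in the light bottle, an estimate of daytime NCP

## daily NCP as described above: daytime net production minus nighttime respiration
ncp_24h = light_o2_produced - dark_o2_consumed

if ncp_24h > 0:
    print('NCP = %.2f mmol O2 m-3 d-1, net autotrophic (CO2 sink)' % ncp_24h)
else:
    print('NCP = %.2f mmol O2 m-3 d-1, net heterotrophic (CO2 source)' % ncp_24h)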

One of the biggest issues is “bottle effect”, the influence the incubation conditions have on the microbial community.  Biological oceanographers have known about “bottle effects” for a long time.  We try to minimize these effects, for example by acid washing glassware to eliminate trace metals, but there is no way that a bottle incubation will ever accurately reflect what happens in the ocean.  Even slightly inhibiting or aiding photosynthesis or respiration in these incubations can tip the NCP balance in one direction or the other.

In situ observations are probably more accurate – at least they look a lot cleaner in the Williams et al. figure – but they have their own set of problems.  In situ values are calculated from the concentration of O2 and argon (Ar) in the water.  Ar is an inert gas and helps constrain the amount of O2 delivered to the water by physical processes.  It is non-trivial (and expensive) to make accurate measurements of these gases in seawater, and the estimate of NCP can be thrown off by unaccounted-for physical processes in the water column.
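That paragraph glosses over how the O2 and Ar measurements actually become an NCP number.  The usual approach in the O2/Ar literature (not spelled out in this paper) is to compute the biological oxygen supersaturation, delta(O2/Ar), and then, assuming a mixed layer at steady state with negligible vertical mixing, scale it by the gas exchange velocity.  The sketch below uses made-up values just to show the shape of the calculation.

## hypothetical measured and equilibrium (saturation) concentrations, mmol m-3
o2_meas, ar_meas = 230.0, 11.0
o2_eq, ar_eq = 225.0, 11.0

k_o2 = 3.0    ## gas exchange (piston) velocity for O2, m d-1, typically parameterized from wind speed

## biological oxygen supersaturation; Ar corrects for physical effects (bubbles, warming)
delta_o2_ar = (o2_meas / ar_meas) / (o2_eq / ar_eq) - 1

## steady-state mixed layer NCP, in mmol O2 m-2 d-1
ncp = k_o2 * o2_eq * delta_o2_ar
print('delta(O2/Ar) = %.3f, NCP = %.1f mmol O2 m-2 d-1' % (delta_o2_ar, ncp))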

From Serret et al. 2015. Blue line/closed squares are for the North Atlantic subtropical gyre. Red line/open squares are for the South Atlantic subtropical gyre. Values above the 1:1 line indicate a negative NCP, values below indicate positive NCP.

Back to the paper.  In this paper Serret et al. undertook perhaps the most comprehensive in vitro analysis of NCP to date on a series of cruises following a north/south transect in the Atlantic Ocean.  In addition to the broad geographical area, what's special about the dataset they collected is not that it's necessarily right – they are still relying on bottle incubations, after all – but, because the data was collected by the same personnel using the same methods, it might at least be consistently wrong.  That's a bigger improvement than it sounds.  Previously NCP estimates have assumed that the subtropical gyres (the vast, low productivity regions in the center of each ocean basin) are more or less the same.  Serret et al. conclusively demonstrated that the North Atlantic and South Atlantic gyres have different NCP values: negative in the north (more respiration than photosynthesis) and positive in the south (more photosynthesis than respiration).

In this day and age the idea that the ocean is the same all over is pretty quaint.  We understand that there is tremendous spatial and temporal variability, even when we average across pretty large areas and timescales.  Direct observations of this variability are lacking, however, given the size of the ocean and the limited number of oceanographic cruises tasked to studying it.  There are some issues with this study in terms of possible seasonal and geographic bias (one member of our discussion group noted that the northern cruise track veers strongly east), and sample size, but it's a tremendous improvement over what's out there.


Adult swim – Old carbon in the DOC pool

We're starting a summer biological oceanography journal club at LDEO and the inaugural paper is by Lechtenfeld et al. (2015), published just a couple of months ago in Nature Communications.  Titled Marine sequestration of carbon in bacterial metabolites, the paper is one of several very interesting articles to come out recently on the nature of marine dissolved organic carbon (DOC).  Marine chemists are on the cusp of something akin to where microbial ecologists were in 2005, when major advances in sequence technology suddenly opened up a world of microbial diversity that hadn't really been anticipated.  Ten years later we're still grappling with the paradigm shift, in particular with the question of how necessary understanding all this diversity is to understanding the function of the microbial ecosystem.  Interestingly, because microbial enzymes are involved in the modification of DOC components, the questions of DOC chemical diversity and microbial diversity are fundamentally intertwined.

Why all the fuss about DOC?  The ocean's a big place and contains a shocking amount of carbon – around 38,000 gigatons (or petagrams, if you prefer).  To put that into perspective consider that the atmosphere contains only about 750 Gt.  Given the size of this reservoir, and the sensitivity of the climate system to atmospheric carbon, the amount of carbon going into and coming out of the ocean keeps a number of people up at night.  Most of this carbon is inorganic: CO2 and its various sister species.  Although this inorganic pool is influenced by biology through photosynthesis and respiration, it is primarily controlled by rather dull and predictable geological cycles*.  A smaller but still very significant amount, roughly equivalent to the amount of carbon in the atmosphere, is DOC.  DOC is interesting because it can be cycled very quickly or very slowly, depending on its chemical structure.  Structures that are easily broken down by microbial enzymes are called labile.  Those that are not are called refractory.

Until recently marine chemists exploring the nature of the DOC pool primarily undertook targeted studies of specific DOC components.  A researcher, for example, might choose to look at a certain class of lipids.  Chemical properties of the target compound class, such as solubility in different solvents, would be exploited to isolate the compounds from seawater, and the compounds would then be analyzed by mass spectrometry and other methods.  Such methods require very precise a priori knowledge of the compounds of interest – a problem for the highly complex and poorly characterized marine DOC pool.

Lechtenfeld et al. used solid phase extraction (SPE) in combination with mass spectrometry, and, more importantly, nuclear magnetic resonance spectroscopy (NMR) to undertake a broad-spectrum characterization of the DOC pool.  By combining these methods they were able to consider both compositional diversity (the various combinations of CHNOP and S that make up organic matter) and structural diversity (the actual arrangements of these elements into meaningful functional groups), without a priori knowledge of what they were looking for.

Using these methods they analyzed the DOC pool produced by marine microbes grown on simple and well defined carbon sources along with the DOC pool of natural seawater.  What they found was, first of all, that the DOC pool is enormously complex, as illustrated in the picture below.  The figure draws heavily on this incredibly detailed work.  I spent a little time with it and I’m still not sure exactly how to interpret the figure – but my best guess is that each point represents a unique molecular configuration, and that (as indicated on the plot) certain regions are diagnostic of specific functional groups.   The plot is in fact a 2D representation of the NMR data, so it helps to think of it in terms of the more familiar independent 1D representations before trying to put it all together.

2D NMR spectrogram from Lechtenfeld et al. 2015.

In this figure the left two frames are DOC from the culture experiments while the right is from near-surface seawater.  While the seawater is clearly more complex, the cultures produced an astonishing diversity of chemical compounds in a very short amount of time.  Interestingly, roughly 30 % of the molecules they produced belong to a class of molecules called carboxyl-rich alicyclic molecules (CRAM).  One example of CRAM are the hopanoids, pictured below sans carboxyl-rich side chain.

From Wikipedia at http://en.wikipedia.org/wiki/Hopanoids#/media/File:Hopanoid_01.png.

Due to their cyclic nature CRAM and related molecules can be very resistant to microbial degradation.  I went to a very cool talk recently where it was proposed that these compounds are most readily degraded when ocean water cycles through the aquifer on hydrothermal ridge flanks.  That happens more than one might think, but your average CRAM still has a long wait before its particular packet of water happens to cycle through the aquifer.

So the point is that marine bacteria are efficient little factories for converting simple, labile organic compounds into complex, refractory ones.  This increases the average age of the DOC pool and the amount of photosynthetically fixed carbon that can be sequestered in the deep ocean and sediment.  Why and how, exactly, they do this is an open question, as is exactly how refractory “refractory” really is.

*I know you geochemists have a sense of humor.


Protein flexibility calculation with Python

In a couple chapters of my dissertation I made use of protein flexibility values calculated with the ProtParam module in Biopython (Bio.SeqUtils.ProtParam).  The flexibility function makes use of amino acid β-factors as reported by Vihinen et al. (1994).  We were called out in a recent review for (among other things) not using more up-to-date β-factors, particularly those reported in Smith et al. (2003).  I’m sure there’s some way to wrench open the hood and modify the ProtParam function so that it uses the newer β-factors, but that doesn’t sound like much fun.  Instead I generated a new function that takes any protein sequence (as a string) and returns a list of flexibilities, smoothed with a 9 residue window.  I knocked this up quick, so test before you use 🙂

def flexcalc(prot):
    b_factor = {'A': '0.717', 'C': '0.668', 'E': '0.963', 'D': '0.921', 'G': '0.843', 'F': '0.599', 'I': '0.632', 'H': '0.754', 'K': '0.912', 'M': '0.685', 'L': '0.681', 'N': '0.851', 'Q': '0.849', 'P': '0.85', 'S': '0.84', 'R': '0.814', 'T': '0.758', 'W': '0.626', 'V': '0.619', 'Y': '0.615'}
    b_factor_rr = {'A': '0.556', 'C': '0.607', 'E': '0.805', 'D': '0.726', 'G': '0.651', 'F': '0.465', 'I': '0.51', 'H': '0.597', 'K': '0.863', 'M': '0.575', 'L': '0.504', 'N': '0.735', 'Q': '0.729', 'P': '0.752', 'S': '0.698', 'R': '0.676', 'T': '0.648', 'W': '0.577', 'V': '0.503', 'Y': '0.461'}
    b_factor_rf = {'A': '0.704', 'C': '0.671', 'E': '0.911', 'D': '0.889', 'G': '0.811', 'F': '0.582', 'I': '0.617', 'H': '0.734', 'K': '0.862', 'M': '0.641', 'L': '0.65', 'N': '0.848', 'Q': '0.817', 'P': '0.866', 'S': '0.846', 'R': '0.807', 'T': '0.742', 'W': '0.609', 'V': '0.603', 'Y': '0.567'}
    b_factor_ff = {'A': '0.847', 'C': '0.67', 'E': '1.11', 'D': '1.055', 'G': '0.967', 'F': '0.653', 'I': '0.686', 'H': '0.894', 'K': '1.016', 'M': '0.74', 'L': '0.788', 'N': '0.901', 'Q': '1.007', 'P': '0.857', 'S': '0.914', 'R': '0.942', 'T': '0.862', 'W': '0.656', 'V': '0.707', 'Y': '0.741'}
    
    rigid = set(['A', 'C', 'F', 'I', 'H', 'M', 'L', 'W', 'V', 'Y'])
    flex = set(['E', 'D', 'G', 'K', 'N', 'Q', 'P', 'S', 'R', 'T'])
    
    ## get beta factors for each residue, according to neighbors
    
    b_factors = []
                    
    for r in range(0, len(prot)):
        b_query = False ## just making sure something doesn't go awry and an old value is used
        query = prot[r]
        if r == 0:
            b_query = b_factor[query]
        elif r == len(prot) - 1:
            b_query = b_factor[query]
        else:
            if prot[r - 1] in rigid:
                if prot[r + 1] in rigid:
                    b_query = b_factor_rr[query]
                elif prot[r + 1] in flex:
                    b_query = b_factor_rf[query]
            elif prot[r - 1] in flex:
                if prot[r + 1] in rigid:
                    b_query = b_factor_rf[query]
                elif prot[r + 1] in flex:
                    b_query = b_factor_ff[query]
        if b_query == False:
            print(r, 'bad b-factor!')
            exit()
        else:
            b_factors.append(b_query)
            
    ## average over 9 aa window
            
    b_factors_win = []
            
    for b in range(0, len(b_factors)):
        win = b_factors[b:b + 9]
        if len(win) == 9:
            b_win = sum(map(float, win)) / 9.0
            b_factors_win.append(b_win)
            
    return(b_factors_win)

Naturally we'd like to see how this compares with ProtParam.  Here's a quick comparison using an aminopeptidase from the psychrophile Colwellia psychrerythraea 34H:

prot = 'MKHFSKLCFLLSTFAVSIAPVTWAHEGATHQHANVSKLTDAYTYANYDQVKATHVYLDLNVDFDKKSLSG'\
'FAELSLDWFTDNKAPLILDTRDLVIHRVMAKNSQGQWVKVNYDLAKRDDVLGSKLTINTPLNAKKVRVYY'\
'NSTEKATGLQWLSAEQTAGKEKPFLFSQNQAIHARSWIPIQDTPSVRVTYTARITTDKDLLAVMSANNEP'\
'GTERDGDYFFSMPQAIPPYLIAIGVGDLEFKAMSHQTGIYAESYILDAAVAEFDDTQAMIDKAEQMYGKY'\
'RWGRYDLLMLPPSFPFGGMENPRLSFITPTVVAGDKSLVNLIAHELAHSWSGNLVTNESWRDLWLNEGFT'\
'SYVENRIMEAVFGTDRAVMEQALGAQDLNAEILELDASDTQLYIDLKGRDPDDAFSGVPYVKGQLFLMYL'\
'EEKFGRERFDAFVLEYFDSHAFQSLGTDNFVKYLKANLTDKYPNIVSDNEINEWIFKAGLPSYAPQPTSN'\
'AFKVIDKQINQLVTDELTLEQLPTAQWTLHEWLHFINNLPVDLDHQRMVNLDKAFDLTNSSNAEIAHAWY'\
'LLSVRADYKEVYPAMAKYLKSIGRRKLIVPLYKELAKNAESKAWAVEVYKQARPGYHGLAQGTVDGVLK'

test1 = flexcalc(prot)

from Bio.SeqUtils.ProtParam import ProteinAnalysis as PA
test_seq = PA(prot)
test2 = PA.flexibility(test_seq)

import matplotlib.pyplot as plt

plt.scatter(test1[1:], test2)

Which produces the following.  ProtParam values on the y-axis and our values on the x-axis.  Note that the scales are different; they are arbitrary.  I haven't figured out how to fit a linear model in Python yet (on the to-do list), but the fit looks pretty good.  Plenty of scatter but the methods are in general agreement (which is good – we already have one paper published using the old method!).  Time to go back and redo a lot of work from the last two years…

Comparison of flexibility values calculated with flexcalc (x-axis) and with ProtParam (y-axis).
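Since fitting a linear model was on the to-do list anyway, here's a minimal sketch with numpy.polyfit (scipy.stats.linregress would work just as well).  It assumes test1, test2, and plt from the snippet above are still in memory.

import numpy as np

x = np.array(test1[1:], dtype = float)    ## our flexibilities, offset as in the scatter plot above
y = np.array(test2, dtype = float)        ## ProtParam flexibilities

slope, intercept = np.polyfit(x, y, 1)    ## ordinary least squares, first degree
r = np.corrcoef(x, y)[0, 1]               ## Pearson correlation between the two methods

print('slope = %.3f, intercept = %.3f, r = %.3f' % (slope, intercept, r))

## overlay the fitted line on the original scatter plot
plt.scatter(x, y)
plt.plot(x, slope * x + intercept, color = 'red')
plt.xlabel('flexcalc flexibility')
plt.ylabel('ProtParam flexibility')
plt.show()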
