Research Technician in the Center for Genomics and Systems Biology, Dr. Kristin Gunsalus job with NEW YORK … – Times Higher Education

Description

The Center for Genomics and Systems Biology (CGSB) at New York University Abu Dhabi (NYUAD) is seeking to appoint a full-time Research Technician in the Chemical and Functional Genomics Laboratory under the supervision of Professor Kristin Gunsalus.

The laboratory's research addresses emergent concerns that pose considerable threats to human health and the environment by investigating novel bioactive agents from microbial sources and chemical libraries. Research projects employ high-throughput (HTP) screening of different biological organisms and high-content phenotypic (HCP) profiling of mammalian cells to identify bioactive candidates for the development of novel solutions to contemporary health and environmental challenges. The Research Technician will play an integral role within the team, providing technical support for operational and research activities. In this role, the key responsibilities of the candidate will include:

The ideal candidate will have a Master's degree in cell & molecular biology or microbiology with at least 3 years of experience in a research lab. The candidate is expected to have hands-on experience in microbiology, molecular biology, cell culture, and cell-based assays, with an eye for detail in optimizing assays and generating quality data. Experience in analytical chemistry techniques and natural product extraction from microbes is highly desirable. Experience with high-content or high-throughput screening and lab automation is a plus. The Research Technician must have a strong work ethic, excellent organizational and communication skills, a high level of proficiency in English, and the ability to work effectively in a team within a multidisciplinary environment.

The terms of employment include a highly competitive salary, housing allowance, and other benefits. To be considered, all applications must be submitted through Interfolio and should include a cover letter, curriculum vitae, a one-page summary of research accomplishments and interests, and two recommendation letters, all in PDF format. If you have any questions, please send your inquiries to nyuad.cgsb.chemgen@nyu.edu.

About the CGSB:

The Center for Genomics and Systems Biology (CGSB) at New York University Abu Dhabi was established to provide a nexus for cutting-edge life sciences research in the United Arab Emirates, with world-class facilities and resources to promote innovative advances in genomics and systems biology. The Center fosters and enhances the research and training missions of NYUAD, where undergraduate students, graduate students, and postdoctoral scientists engage in research across disciplines, facilitated by advanced instrumentation and computational support for high-throughput data collection, visualization, and analysis. The NYUAD-CGSB operates in partnership with its sister center, NYU Biology's CGSB in New York, in an open organizational framework that enables transformative collaborative work across the globe, supported by joint technology and service platforms.

About NYUAD:

NYU Abu Dhabi is a degree-granting research university with a fully integrated liberal arts and science undergraduate program in the Arts, Sciences, Social Sciences, Humanities, and Engineering. NYU Abu Dhabi, NYU New York, and NYU Shanghai form the backbone of NYU's global network university, an interconnected network of portal campuses and academic centers across six continents that enables seamless international mobility of students and faculty in their pursuit of academic and scholarly activity. This global university represents a transformative shift in higher education, one in which the intellectual and creative endeavors of academia are shaped and examined through an international and multicultural perspective. As a major intellectual hub at the crossroads of the Arab world, NYUAD serves as a center for scholarly thought, advanced research, knowledge creation, and sharing through its academic, research, and creative activities.

EOE/AA/Minorities/Females/Vet/Disabled/Sexual Orientation/Gender Identity Employer

UAE Nationals are encouraged to apply.

Equal Employment Opportunity Statement

For people in the EU, click here for information on your privacy rights under GDPR: www.nyu.edu/it/gdpr

NYU is an equal opportunity employer committed to equity, diversity, and social inclusion.

The Road to Biology 2.0 Will Pass Through Black-Box Data – Towards Data Science

AI-first Biotech

This year marks perhaps the zenith of expectations for AI-based breakthroughs in biology, transforming it into an engineering discipline that is programmable, predictable, and replicable. Drawing insights from AI breakthroughs in perception, natural language, and protein structure prediction, we endeavour to pinpoint the characteristics of biological problems that are most conducive to being solved by AI techniques. Subsequently, we delineate three conceptual generations of bio-AI approaches in the biotech industry and contend that the most significant future breakthrough will arise from the transition away from traditional white-box data, understandable by humans, to novel high-throughput, low-cost, AI-specific black-box data modalities developed in tandem with appropriate computational methods.

This post was co-authored with Luca Naef.

The release of ChatGPT by OpenAI in November 2022 has thrust Artificial Intelligence into the global public spotlight [1]. It likely marked the first instance where even people far from the field realised that AI is imminently and rapidly altering the very foundations of how humans will work in the near future [2]. A year down the road, once the limitations of ChatGPT and similar systems had become better understood [3], the initial doom predictions, ranging from the habitual panic about massive job replacement by AI to declarations of OpenAI as the bane of Google, gave way to impatience: "why is it so slow?", in the words of Sam Altman, the CEO of OpenAI [4]. Familiarity breeds contempt, as the saying goes.

We are now seeing the same frenetic optimism around AI in the biological sciences, with hopes that are probably best summarised by DeepMind

Nobel-winning biologist on the most promising ways to stop ageing – New Scientist

ANTI-AGEING is big business. From books encouraging diets such as intermittent fasting to cosmetic creams to combat wrinkles, a multibillion-dollar industry has been built on promises to make us live longer and look younger. But how close are we really to extending our lifespan in a way that gives us extra years of healthy life?

Nobel prizewinner Venki Ramakrishnan, a molecular biologist and former president of the UK's Royal Society, is the latest to tackle this question. He has spent 25 years at the MRC Laboratory of Molecular Biology in Cambridge, UK, studying the ribosome, the cellular machine where our cells make proteins using the information encoded in our genes.

In his latest book, Why We Die: The New Science of Ageing and the Quest for Immortality, he goes on a journey around the cutting-edge biology of human ageing and asks whether it will be possible to extend our lifespan in the near future.

He talks to New Scientist about the recent breakthroughs in our knowledge of what causes ageing, how close we are to creating therapeutics to combat it, and the potential consequences if we succeed.

Graham Lawton: What inspired you to take a break from a hugely successful career researching how cells make proteins to write a book about ageing?

Venki Ramakrishnan: Two things. One is that the translation of genetic code into proteins affects almost every biological process, and it turns out to be central to many aspects of ageing.

The other reason is that we have worried about ageing and death ever since we

How to better research the possible threats posed by AI-driven misuse of biology – Bulletin of the Atomic Scientists

Over the last few months, experts and lawmakers have become increasingly concerned that advances in artificial intelligence could help bad actors develop biological threats. But so far there have been no reported biological misuse examples involving AI or the AI-driven chatbots that have recently filled news headlines. This lack of real-world wrongdoing prevents direct evaluation of the changing threat landscape at the intersection of AI and biology.

Nonetheless, researchers have conducted experiments that aim to evaluate sub-components of biological threats, such as the ability to develop a plan for, or obtain information that could enable, misuse. Two recent efforts, by RAND Corporation and OpenAI, to understand how artificial intelligence could lower barriers to the development of biological weapons concluded that access to a large language model chatbot did not give users an edge in developing plans to misuse biology. But those findings are just one part of the story and should not be considered conclusive.

In any experimental research, study design influences results. Even if technically executed to perfection, all studies have limitations, and both reports dutifully acknowledge theirs. But given the extent of the limitations in the two recent experiments, the reports on them should be seen less as definitive insights and more as opportunities to shape future research, so policymakers and regulators can apply it to help identify and reduce potential risks of AI-driven misuse of biology.

The limitations of recent studies. In the RAND Corporation report, researchers detailed the use of red teaming to understand the impact of chatbots on the ability to develop a plan of biological misuse. The RAND researchers recruited 15 groups of three people to act as red team "bad guys." Each of these groups was asked to come up with a plan to achieve one of four nefarious outcomes ("vignettes") using biology. All groups were allowed to access the internet. For each of the four vignettes, one red team was given access to an unspecified chatbot and another red team was given access to a different, also unspecified, chatbot. When the authors published their final report and accompanying press release in January, they concluded that large language models "do not increase the risk of a biological weapons attack by a non-state actor."

This conclusion may be an overstatement of their results, as their focus was specifically on the ability to generate a plan for biological misuse.

The other report was posted by the developers of ChatGPT, OpenAI. Instead of using small groups, OpenAI researchers had participants work individually to identify key pieces of information needed to carry out a specific defined scenario of biological misuse. The OpenAI team reached a conclusion similar to the RAND team's: GPT-4 provides "at most a mild uplift in biological threat creation accuracy." Like RAND's, this also may be an overstatement of results, as the experiment evaluated the ability to access information, not actually create a biological threat.

The OpenAI report was met with mixed reactions, including skepticism and public critique regarding the statistical analysis performed. The core objection was the appropriateness of the use of a correction during analysis that redefined what constituted a statistically significant result. Without the correction, the results would have been statistically significant; that is to say, the use of the chatbot would have been judged to be a potential aid to those interested in creating biological threats.
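The disputed correction can be illustrated with a toy calculation. The numbers below (eight comparisons, an uncorrected p-value of 0.03) are hypothetical, chosen only to show how a Bonferroni-style adjustment can flip a verdict of significance; neither figure comes from the OpenAI report.

```python
# Hypothetical sketch: a multiple-comparison correction changes the
# threshold a p-value must beat, and can flip "significant" to "not".
alpha = 0.05            # conventional significance threshold
num_comparisons = 8     # hypothetical number of outcome measures tested
p_value = 0.03          # hypothetical uncorrected p-value for one comparison

significant_raw = p_value < alpha                          # compared against 0.05
significant_corrected = p_value < alpha / num_comparisons  # Bonferroni: 0.05/8 = 0.00625

print(significant_raw, significant_corrected)  # True False
```

The same observed result counts as evidence of chatbot "uplift" or not depending entirely on which threshold the analysts adopt, which is why the choice of correction drew public critique.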

Regardless of their limitations, the OpenAI and RAND experiments highlight larger questions which, if addressed head-on, would enable future experiments to provide more valuable and actionable results about AI-related biological threats.

Is there more than statistical significance? In both experiments, third-party evaluators assigned numeric scores to the text-based participant responses. The researchers then evaluated whether there was a statistically significant difference between those who had access to chatbots and those who did not. Neither research team found one. But typically, the ability to detect a statistically significant difference largely depends on the number of data points; more data points allow a smaller difference to be considered statistically significant. Therefore, if the researchers had recruited many more participants, the same differences in score could have been statistically significant.
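That dependence on sample size can be sketched with summary statistics. The scores, spreads, and group sizes below are hypothetical and do not come from either study; the point is only that the same raw score difference sits below the usual critical value (about 2.0) with 15 participants per arm, but far above it with 500.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for two independent samples, from summary stats."""
    standard_error = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (mean1 - mean2) / standard_error

# Hypothetical 0-10 plan-quality scores: chatbot arm vs. internet-only arm.
# Means (6.5 vs 6.0) and spreads (sd = 2.0) are illustrative only.
small = welch_t(6.5, 2.0, 15, 6.0, 2.0, 15)    # study-sized groups
large = welch_t(6.5, 2.0, 500, 6.0, 2.0, 500)  # same effect, many more participants

print(round(small, 2))  # 0.68: well below the ~2.0 critical value
print(round(large, 2))  # 3.95: the identical score difference now clears it
```

In other words, "no statistically significant difference" in a 15-team study is compatible with a real difference that a larger study would detect, which is why the null results should not be read as proof of no uplift.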

Reducing text to numbers can bring other challenges as well. In the RAND study, the teams, regardless of access to chatbots, did not generate any plans that were deemed likely to succeed. However, there may have been meaningful differences in why the plans were not likely to succeed, and systematically comparing the content of the responses could prove valuable in identifying mitigation measures.

In the OpenAI work, the goal of the participants was to identify a specific series of steps in a plan. However, if a participant were to miss an early step in the plan, all the remaining steps, even if correct, would not count towards their score. This meant that if someone made an error early on, but identified all the remaining information correctly, they would score similarly to someone who identified no correct information at all. Again, researchers may gain insight from identifying patterns in which steps participants failed, and why.

Are the results generalizable? To inform an understanding of the threat landscape, conclusions must be generalizable across scenarios and chatbots. Future evaluators should be clear about which large language models are used (the RAND researchers were not). It would be helpful to understand whether researchers arrive at a similar answer with different models, or different answers with the same model. Knowing the specifics would also enable comparisons of results based on the characteristics of the chatbot used, helping policymakers understand whether models with certain characteristics have unique capabilities and impacts.

The OpenAI experiment used just one threat scenario. There is not much reason to believe that this one scenario is representative of all threat scenarios; the results may or may not generalize. There is a tradeoff in using one specific scenario: it becomes tenable for one or two people to evaluate 100 responses. On the other hand, the RAND work was much more open-ended, as participant teams were given flexibility in how they decided to achieve their intended goal. This makes the results more generalizable, but it required a more extensive evaluation procedure involving many experts to sufficiently examine 15 diverse scenarios.

Are the results impacted by something else? Part way through their experiment, the RAND researchers enrolled a "black cell," a group with significant experience with large language models. The RAND researchers made this decision because they noticed that some of their study's red teams were struggling to bypass safety features of the chatbots. In the end, the black cell received an average score almost double that of the corresponding red teams. The black cell participants didn't need to rely only on their expertise with large language models; they were also adept at interpreting the academic literature about those models. This provided a valuable insight to the RAND researchers: "[t]he relative outperformance of the black cell illustrates that a greater source of variability appears to be red team composition, as opposed to LLM access." Simply put, it probably matters more who is on the team than whether the team has access to a large language model.

Moving forward. Despite their limitations, red teaming and benchmarking efforts remain valuable tools for understanding the impact of artificial intelligence on the deliberate biological threat landscape. Indeed, the National Institute of Standards and Technology's Artificial Intelligence Safety Institute Consortium, a part of the US Department of Commerce, currently has working groups focused on developing standards and guidelines for this type of research.

Outside of the technical design and execution of the experiments, challenges remain. The work comes with meaningful financial costs, including the compensation of participants for their time (OpenAI pays $100 per hour to experts); the cost of individuals to recruit participants, design experiments, administer the experiments, and analyze data; and the cost of biosecurity experts to evaluate the responses. Therefore, it is important to consider who will fund this type of work in the future. Should artificial intelligence companies fund their own studies, a perceived conflict of interest will linger if the results are intended to inform governance or public perception of their models' risks. But at the same time, funding directed to nonprofits like RAND Corporation or to academia does not inherently give researchers access to unreleased or modified models, like the version used in the OpenAI experiment. Future work should learn from these two reports, and could benefit from considering the following:

The path toward more useful research on AI and biological threats is hardly free of obstacles. Employees at the National Institute of Standards and Technology have reportedly expressed outrage regarding the recent appointment of Paul Christiano, a former OpenAI researcher who has expressed concerns that AI could pose an existential threat to humanity, to a leadership role at the Artificial Intelligence Safety Institute. Employees are concerned that Christiano's personal beliefs about catastrophic and existential risk posed by AI broadly will affect his ability to maintain the National Institute of Standards and Technology's commitment to objectivity.

This internal unrest comes on the heels of reporting that the physical buildings that house the institute are falling apart. As Christiano looks to expand his staff, he will also need to compete against the salaries paid by tech companies. OpenAI, for example, is hiring for safety-related roles with the low end of the base salary exceeding the high end of the General Schedule pay scale (the scale for federal salaries). It is unlikely that any relief will come from the 2024 federal budget, as lawmakers are expected to decrease the institute's budget from 2023 levels. But if the United States wants to remain a global leader in the development of artificial intelligence, it will need to make the financial commitments required to ensure that the work of evaluating artificial intelligence is done right.

Fired Biology Professor Fights Back and Wins, Has a Message For Fellow Christians – CBN.com

A Texas college professor who said he was fired after university leaders reportedly found his teachings too religious has been reinstated to his position more than a year after being terminated.

Dr. Johnson Varkey and his attorneys at First Liberty Institute recently announced Varkey had won back his adjunct professorial job at St. Philip's College in San Antonio, Texas, after being fired in 2023 for teaching standard principles about human biology and reproduction.

The announcement comes after a favorable settlement was reached with the Alamo Community College District; the school system voluntarily reinstated Varkey.

"I was so excited," the professor told CBN News after the announcement. "And thank the Lord for that outcome."

Varkey said he's grateful to First Liberty and to the Lord for helping him get back his position.

"I am excited to go back and teach," he said.

As CBN News previously reported, Varkey, a biology professor for the past 20 years, consistently taught the same facts about the human reproductive system without any problems. But that changed last year, when he received a notice of dismissal.

"I was surprised and I was shocked, because, you know, never expected for such a letter from, or such an email from, the school, because I've been teaching that for that school for the last 20 years and without any complaints," he said.

Varkey believes his lessons on human biology, including sex being determined by the X and Y chromosomes, sparked the complaints that led to his dismissal.

"On the 12th of January [of 2023], I received an email from the vice president of the department of the school that they are doing an ethic violation investigation on me, so I responded to his email and asked him, 'What are the complaints?'" Varkey said. "So, what he said was the human resources will contact me."

The professor said, although he asked about the complaints, he received no response from HR and didn't get a chance to defend himself before his firing.

"When I got the letter of termination, what the VP mentioned was that some of the complaints were offensive to the homosexuals and transgender," he said. "So, I presume that, very possibly, it is based on a human reproductive system, which I taught, which was in November [2022]."

Varkey and his attorney believe he was potentially unfairly targeted by some who disagreed with his views or outside work as a pastor.

First Liberty filed a complaint with the Equal Employment Opportunity Commission (EEOC), but before receiving an official response, St. Philip's College issued the reinstatement.

Kayla Toney, associate counsel at First Liberty Institute, told CBN News her legal firm was outraged after finding out about Varkey's plight, noting the fact that his speech had been targeted was deeply concerning.

"There were many First Amendment violations that we saw right away, first his religious exercise," Toney said. "He is a committed Christian; he's also a pastor in addition to his role as a biology professor. So, we think there was some religious targeting by students who knew he was a pastor and thought they could accuse him of something like religious preaching."

The attorney said her client was simply "teaching straight from the textbook" and "following those lesson plans that he had used for 19 years."

Beyond that, Toney cited academic freedom concerns at the center of the case, arguing professors like Varkey should be free to teach from textbooks without fear of reprisal.

"We also saw some due process concerns, that Dr. Varkey never had the opportunity to meet with the students who were upset, or learn what their complaints were, or have a conversation, even with his supervisor, which was part of the college's own procedures for a complaint or a termination," she said.

Varkey could return to the classroom as soon as this spring, with Toney noting the EEOC complaint will be dismissed as part of the settlement, provided he is able to teach again as planned.

The professor said he hopes his successful quest to fight for his job will inspire other Christians who might face similar issues and barriers.

"I would say, don't quit, because there are people very supportive, just like First Liberty," he said, urging people to be brave and take a stand. "Stand for the truth."

Understanding Reductionism and ID – Discovery Institute

Photo: Monarch butterfly, by liz west from Boxborough, MA [CC BY 2.0 ], via Wikimedia Commons.

The burgeoning field of systems biology, as defined by the National Institutes of Health (NIH), is "an approach in biomedical research to understanding the larger picture, be it at the level of the organism, tissue, or cell, by putting its pieces together. It's in stark contrast to decades of reductionist biology, which involves taking the pieces apart."

I'm sure that statement is designed to make systems biology sound radical and exciting, and it succeeds. It's especially exciting for proponents of intelligent design, because ID theorists have been arguing against reductionism in biology for a long time.

But we need to be careful. We don't want to make an argument based on an equivocation. The word "reductionism" is thrown around a lot, but it can mean several different things. It's not as simple as saying, "Biologists are learning that reductionism is bad!"

As it turns out, the move away from reductionism in systems biology is significant for the ID debate, but not simply by word-association. So I want to take some time to suss out the different meanings of the word "reductionism" and what they have to do with intelligent design.

There are two kinds of reductionism that are relevant to this discussion: methodological reductionism and ontological reductionism. (For a third kind, epistemological reductionism, see this cartoon.) The opposing philosophies are, respectively, methodological antireductionism and ontological antireductionism. The terms are a bit eye-splitting, but they aren't difficult to understand.

Methodological reductionism is the idea that a thing can best be understood by breaking it down into its parts. The contrary philosophy, methodological antireductionism, says that a thing can be best understood by looking at it as a whole.

The opposing views are summed up nicely in a conversation between the wizards Saruman and Gandalf in The Lord of the Rings. Saruman shows Gandalf his new rainbow-colored outfit and tells him that he has decided to stop going by "Saruman the White" and go by "Saruman of Many Colours" instead.

"I liked white better," says Gandalf.

"White!" Saruman sneers. "It serves as a beginning. White cloth may be dyed. The white page can be overwritten; and the white light can be broken."

"In which case it is no longer white," says Gandalf. "And he that breaks a thing to find out what it is has left the path of wisdom."

Saruman is a methodological reductionist and Gandalf is a methodological antireductionist.

Methodological reductionism: "The white light can be broken."

Methodological antireductionism: "He that breaks a thing to find out what it is has left the path of wisdom."

Ontological reductionism, on the other hand, is not about the best way to study something, but rather about what that thing really is at the deepest level. Ontological reductionism says that a thing can be reduced to its most basic parts, and that's what it is, nothing more. According to this theory, a tree is a collection of cells, which in turn are collections of molecules, which are collections of atoms, which are collections of subatomic particles. So in the final analysis, a tree is a collection of subatomic particles.

This view, and its antithesis, is expressed in C. S. Lewis's Voyage of the Dawn Treader. On an island near the edge of the world, the characters meet a being named Ramandu who claims to be a star.

"In our world," Eustace Scrubb objects, "a star is a huge ball of flaming gas."

"Even in your world, my son," replies Ramandu, "that is not what a star is but only what it is made of."

Eustace is an ontological reductionist and Ramandu is an ontological antireductionist. (And if Ramandu's statement seems mind-bending or baffling, that's because most of us were educated into ontological reductionism.)

Ontological reductionism: "A star is a huge ball of flaming gas."

Ontological antireductionism: "That is not what a star is but only what it is made of."

The field of systems biology is methodologically antireductionist. It does not have to be ontologically antireductionist. So, systems biologists do not necessarily reject materialism or physicalism. They do not have to believe in minds, or be willing to posit neo-Platonic souls of cabbages, or think the true meaning of a mushroom can only be found in its wholeness.

They have simply found it to be the case that looking at living organisms as complete systems yields better results than only taking them apart to focus on their bare components. Researchers are coming to realize that it is more productive to think about the plan of an organism than simply about its physical structure or components.

But this is important, because whether systems biologists always admit it or not, methodological antireductionism implies ontological antireductionism. Gandalf agrees with Ramandu, not Eustace.

That's not to say that ontological antireductionism logically follows from methodological antireductionism, or vice versa. In theory, you could have one without the other. But the success of methodological antireductionism fulfills a prediction of the hypothesis of ontological antireductionism.

That is: if there really is a plan, then you would naturally suppose that looking for a plan would turn out to be a great strategy, and that proceeding as if there were no plan would not be a great strategy. And that is the reality. It turns out that when you take a creature apart to see what its parts are, you see a bunch of parts; but when you take a step back and look for a plan, you find a plan.

Intelligent design is a sub-type of ontological antireductionism. To be exact, it is one way of answering the question: if a thing isn't just the sum of its parts, then what is it? ID proposes that (at least some) natural entities are more than the sum of their parts because they are ultimately an expression of an idea in a conscious mind. If this is true, then you would predict those entities to be best understood by grasping the idea behind them; you would try to see the scheme, the purpose, the outline, the plan.

The neo-Darwinian model, in contrast, does not inherently lead to this prediction, because the mechanism of natural selection and random variation is, by definition, an uncoordinated piling-up of useful features, whereas a plan is the coordination of useful features. (Michael Behe's three books and Marcos Eberlin's Foresight explore this idea in depth.)

This is not proof of the design hypothesis, but it is evidence for it. In fact, this sort of evidence is one of the pillars of the scientific method: the strength of a scientific hypothesis depends on its ability to make predictions that are borne out by investigation. Based on that criterion, the hypothesis of intelligent design is doing very well. The hypothesis of mindless evolution is not doing so well, because although mindless processes might generate great complexity, they do not make plans.

Some systems biologists may want to reject Saruman but stay with Eustace; to reap the practical benefits of methodological antireductionism while avoiding the philosophical costs. But they may find that stance difficult to maintain. An unwary systems biologist could easily drift over to Ramandu's Island, where the ID theorists are waiting.

All creatures great and small: Sequencing the blue whale and Etruscan shrew genomes – University of Wisconsin-Madison

Illustration: Beth Atkinson

Size doesn't matter when it comes to genome sequencing in the animal kingdom, as a team of researchers at the Morgridge Institute for Research recently illustrated when assembling the sequences for two new reference genomes: one from the world's largest mammal and one from one of the smallest.

The blue whale genome was published in the journal Molecular Biology and Evolution, and the Etruscan shrew genome was published in the journal Scientific Data.

Research models using animal cell cultures can help navigate big biological questions, but these tools are only useful when following the right map.

"The genome is a blueprint of an organism," says Yury Bukhman, first author of the published research and a computational biologist in the Ron Stewart Computational Group at the Morgridge Institute, an independent research organization that works in affiliation with the University of Wisconsin–Madison in emerging fields such as regenerative biology, metabolism, virology and biomedical imaging. "In order to manipulate cell cultures or measure things like gene expression, you need to know the genome of the species; it makes more research possible."

The Morgridge team's interest in the blue whale and the Etruscan shrew began with research on the biological mechanisms behind the developmental clock from James Thomson, emeritus director of regenerative biology at Morgridge and longtime professor of cell and regenerative biology in the UW School of Medicine and Public Health. It's generally understood that larger organisms take longer to develop from a fertilized egg to a full-grown adult than smaller creatures, but the reason why remains unknown.

"It's important just for fundamental biological knowledge from that perspective. How do you build such a large animal? How can it function?" says Bukhman.

Bukhman suggests that a practical application of this knowledge is in the emerging area of stem cell-based therapies. To heal an injury, stem cells must differentiate into specialized cell types of the relevant organ or tissue. The speed of this process is controlled by some of the same molecular mechanisms that underlie the developmental clock.

Understanding the genomes of the largest and smallest of mammals may also help unravel the biomedical mystery known as Peto's paradox. This is a curious phenomenon in which large mammals such as whales and elephants live longer and are less likely to develop cancer (often caused by DNA replication errors that occasionally happen during cell division), despite having a greater number of cells (and therefore more cell divisions) than smaller mammals like humans or mice.

Meanwhile, knowledge of the Etruscan shrew genome will enable new insights in the field of metabolism. The shrew has an extremely high surface-to-volume ratio and fast metabolic rate. These high energy demands are a product of its tiny size (no bigger than a human thumb and weighing less than a penny), making it an interesting model for better understanding the regulation of metabolism.

The blue whale and Etruscan shrew genome projects are part of a large collaborative effort involving dozens of contributors from institutions across North America and several European countries, in conjunction with the Vertebrate Genomes Project.

The mission of the VGP is to assemble high-quality reference genomes for all living vertebrate species on Earth. This international consortium of researchers includes top experts in genome assembly and curation.

"The VGP has established a set of methods and criteria for producing a reference genome," Bukhman says. "Accuracy, contiguity, and completeness are three measures of quality."

Previous methods to sequence genomes used short-read technologies, which produce short stretches of DNA sequence, 150 to 300 base pairs long, called reads. Overlapping reads are then assembled into longer contiguous sequences, called contigs.

Contigs assembled from short reads tend to be relatively small compared to mammalian chromosomes. As a result, draft genomes reconstructed from such contigs tend to be very fragmented and have a lot of gaps.

Instead, the team used long-read sequencing, with reads around 10,000 base pairs in length; the principal advantage is longer contigs and fewer gaps.

"Then you can use other methods such as optical mapping and Hi-C to assemble contigs into bigger structures called scaffolds, and those can be as big as an entire chromosome," Bukhman explains.
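The reads-to-contigs step described above can be illustrated with a toy greedy overlap assembler. This is a drastic simplification of real genome assemblers (which handle sequencing errors, repeats, and both DNA strands); the sequences and overlap threshold below are invented for illustration:

```python
def overlap(a, b, min_len):
    """Length of the longest suffix of a matching a prefix of b (>= min_len), else 0."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-n:] == b[:n]:
            return n
    return 0

def assemble(reads, min_len=3):
    """Greedily merge the pair of reads with the largest overlap until none remains."""
    reads = list(reads)
    while True:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b, min_len)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            return reads  # the remaining merged sequences are the contigs
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]

# Three overlapping "reads" assemble into a single contig:
contigs = assemble(["ATCGTAC", "GTACGGA", "GGATTC"])
# contigs == ["ATCGTACGGATTC"]
```

Longer reads raise the chance that any two reads overlap unambiguously, which is why long-read sequencing yields fewer, larger contigs, as described above.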

The researchers also analyzed segmental duplications, large regions of duplicated sequence that often contain genes and can provide insight into evolutionary processes when compared to other species, either closely or distantly related.

They found that the blue whale had a large burst of segmental duplications in the recent past, with larger numbers of copies than the bottlenose dolphin and the vaquita (the world's smallest cetacean; cetaceans are the order of mammals including whales, dolphins and porpoises). While most of the copies of genes created this way are likely non-functional, or their function is still unknown, the team did identify several known genes.

One encodes the protein metallothionein, which is known to bind heavy metals and sequester their toxicity, a useful mechanism for large animals that accumulate heavy metals while living in the ocean.

A reference genome is also useful for species conservation. The blue whale was hunted almost to extinction in the first half of the 20th century. It is now protected by an international treaty and the populations are recovering.

"In the world's oceans, the blue whale is basically everywhere except for the high Arctic. So, if you have a reference genome, then you can make comparisons and can better understand the population structure of the different blue whale groups in different parts of the globe," Bukhman says. "The blue whale genome is highly heterozygous; there's still a lot of genetic diversity, which has important implications for conservation."

Which raises the question: how do you go about acquiring samples from a large, endangered creature that exists in the vastness of the oceans?

"The logistics posed several challenges, including the fact that blue whale sightings in our area are very rare and almost unpredictable," says Susanne Meyer, a research specialist at the University of California Santa Barbara, who spent over a year coordinating the permits, personnel and resources needed to procure the samples.

Once their local whale-watching team determined the timing and coordinates of the whale sightings, they brought in licensed whale researcher Jeff K. Jacobsen to perform the whale biopsies using an approved standard cetacean skin biopsy technique, which involves a custom stainless steel biopsy tube fitted to a crossbow arrow.

The team acquired samples from four blue whales, which Meyer used to develop and expand fibroblasts in cell culture for the genome sequencing and further research use.

While the Etruscan shrew genome wasnt studied as extensively as the blue whale genome, the team reported an interesting finding.

"We found that there are relatively few segmental duplications in the shrew genome," Bukhman says, while emphasizing that this result does not necessarily correlate to the diminutive size of the shrew itself. "While shrews belong to a different mammalian order, some similarly small rodents have lots of segmental duplications, and the house mouse is kind of a champion in the sense that it has the most. So, it's not a matter of size."

As the Vertebrate Genomes Project makes strides in producing more high-quality reference genomes for all vertebrates, Bukhman is hopeful that contributions to those efforts will continue to advance biological research in the future.

These studies were supported by grants from the National Science Foundation (2046753, DBI2003635, DBI2146026, IIS2211598, DMS2151678, CMMI1825941 and MCB1925643) and National Institutes of Health (R01GM133840).

Read more:

All creatures great and small: Sequencing the blue whale and Etruscan shrew genomes - University of Wisconsin-Madison

Seeing Double: USU Biologist Carl Rothfels is Developing Novel Polyploid Phylogenetics Tools – Utah State University

Humans have 23 pairs of chromosomes, 46 in total. Half come from your mother and the other half from your father. We're a diploid species, meaning most of our chromosomes come in matched sets.

Plants are a different story. Unlike in animals, polyploidy (having more than two sets of chromosomes) is very common among plants.

"Polyploidy," says Utah State University plant biologist Carl Rothfels, "is a dominant feature of existing plant species and appears to be a driver of plant diversity." Advances in genomics techniques, including CRISPR, are fueling study of the phenomenon, he says, yet polyploid phylogenetics, the study of the evolutionary history and relationships among and within polyploid plant groups, lags behind.

Rothfels was awarded a National Science Foundation Faculty Early Career Development Program (CAREER) grant to develop phylogenetic tools and apply them to the fern family Cystopteridaceae, including fragile fern and oak fern, commonly found in the Intermountain West.

"This plant family provides an excellent system for investigating big questions about polyploidy and its role in evolution and diversity," says Rothfels, director of USU's Intermountain Herbarium and associate professor in the Department of Biology and USU Ecology Center. "Is polyploidy an important generator of diversity and thus an engine of evolutionary success? Or is it an evolutionary dead end, forming new species that go extinct more quickly?"

Unraveling these mysteries is a formidable task, he says, involving collection of hard-to-get data and developing, from the ground up, complex analytical tools.

"When evolution happens in a purely branching way, you can construct a family tree," Rothfels says. "But polyploids mess up this system and make it harder because many polyploids are also hybrids, so their history is more of a reticulated network, a web of life instead of a tree of life."

He explains the three primary steps involved in a polyploid phylogenetics study, none of which is simple and all of which present their own challenges.

The first step involves sequencing each copy of the target locus or set of loci present in a polyploid sample, and accurately reconstructing each distinct sequence from each subgenome.

The second step is to determine which subgenome each copy came from so that subgenome histories can be accurately reconstructed.

Step three, for which Rothfels is developing a mathematical model, requires inferring polyploid evolutionary histories, which twist and turn along unexpected paths.
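The second step above, assigning each sequence copy to a subgenome, can be sketched in miniature: compare each copy recovered from a polyploid sample against sequences from its putative diploid progenitors and assign it to the closest one. The sequences, names, and distance measure below are invented for illustration; real analyses use phylogenetic models rather than raw mismatch counts:

```python
def distance(a, b):
    """Per-site mismatch count between two aligned, equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def assign_subgenomes(copies, progenitors):
    """Map each sequence copy to the progenitor (subgenome) it most resembles."""
    return {
        name: min(progenitors, key=lambda p: distance(seq, progenitors[p]))
        for name, seq in copies.items()
    }

# Hypothetical aligned sequences from two diploid progenitor lineages:
progenitors = {"subgenome_A": "ACGTACGT", "subgenome_B": "ACGAACTT"}

# Two copies of the locus recovered from an allopolyploid sample:
copies = {"copy1": "ACGTACGA", "copy2": "ACGAACTA"}

assignment = assign_subgenomes(copies, progenitors)
# copy1 differs from A at 1 site and from B at 3, so it is assigned to A;
# copy2 is the reverse, so it is assigned to B.
```

Once each copy is placed in a subgenome, the subgenome histories can be reconstructed separately, which is what makes the reticulate "web of life" tractable.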

In pursuit of these aims, Rothfels is enlisting data collection help from an army of students and citizen scientists through the online iNaturalist network, along with a newly established annual botany trip known as the Intermountain Botanical Foray.

"We held our first foray in June 2023, and the plan is to hold this yearly trip, open to plant fans of all walks of life, in a different Intermountain location each year," he says. "This past year, we visited the Desert Experimental Range in Millard County in southwestern Utah, where we logged more than 1,500 iNaturalist observations covering more than 200 species of plants."

In tandem with this effort, Rothfels is developing a curriculum in field botany to foster undergraduate learning, which he has introduced to students from USU Blanding who are participants in the yearly Native American Summer Mentorship Program.

"Our lab is working with USU's NASMP and MESAS (Mentoring and Encouraging Student Academic Success) programs to get students involved in the study of plants, including Indigenous knowledge," he says. "Our research is a collective effort, focused not just on scientific progress but on community building as well."

Visit link:

Seeing Double: USU Biologist Carl Rothfels is Developing Novel Polyploid Phylogenetics Tools - Utah State University

Department of Biology Special Seminar: Angela Hancock – The Hub at Johns Hopkins

Description

Angela Hancock, an independent group leader at the Max Planck Institute for Plant Breeding Research, will give a talk titled "Molecular Mechanisms of Adaptation to Novel Environments" for the Biology Department.

An organism's metabolism and growth are determined by a complex interplay of environmental signals and interacting molecular pathways. Angela Hancock's lab investigates how molecular response systems evolve in new and changing environments. They combine population genetics, functional genomics, computational modeling, and gene editing to deconstruct and reconstruct the molecular steps that enable adaptation to extreme environments.

This is a hybrid event; to attend virtually, use the Zoom link.

Read the original post:

Department of Biology Special Seminar: Angela Hancock - The Hub at Johns Hopkins

New Imaging Tool Advances Study of Lipid Biology – University of California San Diego

From fruit flies to humans, there are many, many different types and subtypes of lipids operating at the same time within any living organism. For example, the plasma portion of human blood is home to 600 different types of lipids. While we know that lipid molecules play myriad roles in health, aging and disease, researchers currently struggle to uncover the fine details of these roles, details that could unlock cures, extend the human healthspan, and solve mysteries of aging.

One big challenge is that multiple subtypes of lipids are often found within the very same cells, yet there is no ideal tool for identifying and tracking the activity of specific lipids within individual cells. While there is a long list of existing techniques used to try to answer some of the toughest questions about the roles of specific lipid subtypes in individual cells and tissues, all current techniques have drawbacks.

Now, a study led by bioengineers at the University of California San Diego marks a significant step forward in this critical area of lipid research. The new work was published on February 21, 2024, in the journal Nature Communications. In this paper, the research team presents what they believe is the first method for distinguishing multiple lipid subtypes in cells and tissue samples using nondestructive, label-free optical imaging methods.

The new light-based imaging tool is a hyperspectral imaging platform called PRM-SRS. Its new capabilities are due, in part, to the fact that the platform integrates a Penalized Reference Matching (PRM) algorithm with Stimulated Raman Scattering (SRS) microscopy.
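The paper's exact PRM algorithm is not reproduced here, but the general idea of matching a measured hyperspectral pixel against known reference spectra can be sketched as a penalized least-squares unmixing problem. All spectra and abundances below are synthetic, and the penalty used (a simple ridge/L2 term) is a generic stand-in for the paper's specific penalization:

```python
import numpy as np

def match_references(pixel, refs, penalty=0.1):
    """Estimate reference weights w minimizing ||refs @ w - pixel||^2 + penalty * ||w||^2.

    refs has shape (n_wavenumbers, n_references); the closed-form ridge
    solution is (A^T A + penalty * I)^-1 A^T pixel.
    """
    A = refs
    return np.linalg.solve(A.T @ A + penalty * np.eye(A.shape[1]), A.T @ pixel)

rng = np.random.default_rng(0)
refs = rng.random((100, 3))            # 3 synthetic reference spectra, 100 spectral channels
true_w = np.array([0.7, 0.2, 0.1])     # hypothetical lipid-subtype abundances in one pixel
pixel = refs @ true_w + 0.01 * rng.standard_normal(100)  # noisy measured spectrum

w = match_references(pixel, refs, penalty=0.01)
# w approximately recovers true_w; applying this per pixel yields one
# abundance map per reference spectrum from a single hyperspectral image.
```

The appeal of this family of methods is that one label-free hyperspectral acquisition can be decomposed against many reference spectra at once, which is what enables the multiplexed visualization described below.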

Use of this platform could profoundly deepen researchers' ability to understand the roles that different lipid subtypes are playing in a wide range of cells and tissues. This is particularly relevant because traditional imaging methods often fall short in capturing the intricate spatial distributions and metabolic dynamics of lipid subtypes, hampering efforts to unravel their significance in aging and various diseases.

"Our PRM-SRS platform represents a paradigm shift in lipid imaging, offering unparalleled capabilities in distinguishing lipid subtypes within complex biological environments," said UC San Diego bioengineering professor Lingyan Shi, the senior corresponding author. As far as we know, we are presenting the first method for distinguishing multiple lipid subtypes in cells and tissue samples by using nondestructive label-free optical imaging methods. Shi was selected as anAlfred P. Sloan Research Fellow in 2023.

As described in the Nature Communications paper, the researchers harnessed PRM-SRS to visualize and identify distinct lipid subtypes across different organs and species, including high-density lipoprotein particles in human kidneys, cholesterol-rich granule cells in the mouse hippocampus, and subcellular distributions of two lipids (sphingosine and cardiolipin) in the human brain. In these demonstration cases, the PRM-SRS imaging platform unveiled unprecedented insights with enhanced chemical specificity (used to identify lipid subtypes) and subcellular resolution.

Multiplexed visualization: Unlike conventional methods, PRM-SRS enables the simultaneous visualization of multiple lipid subtypes from single label-free hyperspectral imaging (HSI) sets, expanding the scope of lipid research.

Diagnostic potential: Preliminary analyses of human kidney tissue samples suggest a potential application of PRM-SRS in diagnosing and prognosing renal diseases, such as offering non-invasive insights into dyslipidemia-associated conditions.

Neurological insights: By mapping lipid distributions in mouse and human brain tissues, PRM-SRS sheds light on the role of lipid metabolism in neurological disorders, paving the way for targeted therapeutic interventions.

Future directions: With its versatility and ease of implementation, PRM-SRS holds promise for diverse applications, from high-throughput studies to deep learning-enhanced imaging techniques, fostering a new era of multiplex cell and tissue imaging.

Multi-molecular hyperspectral PRM-SRS microscopy in Nature Communications: https://doi.org/10.1038/s41467-024-45576-6

This project is led by researchers in the Shu Chien-Gene Lay Department of Bioengineering at the UC San Diego Jacobs School of Engineering. This collaborative project includes important contributions from researchers at Northwestern University School of Medicine, the UC Irvine School of Medicine, Duke University's School of Medicine and Department of Biomedical Engineering, and Washington University in St. Louis.

Paper authors: Wenxu Zhang, Yajuan Li, Anthony A. Fung, Zhi Li, Hongje Jang, Honghao Zha, Xiaoping Chen, Fangyuan Gao, Jane Y. Wu, Huaxin Sheng, Junjie Yao, Dorota Skowronska-Krawczyk, Sanjay Jain, Lingyan Shi

The researchers report no conflict of interest.

This work was supported by UC San Diego startup funds, NIH R01GM149976, NIH U01AI167892, NIH 5R01NS111039, NIH R21NS125395, NIH U54DK134301, NIH U54HL165443, NIH U54CA132378, and a Hellman Fellow Award. We are grateful for the support of the Washington University Kidney Translational Research Center (KTRC) for kidney samples and the HuBMAP grant U54HL145608. We thank Dr. E. Bigio and Dr. M.-M. Mesulam from the Mesulam Center for Cognitive Neurology and Alzheimer's Disease (MCCNAD) for providing the de-identified autopsy brain samples; the MCCNAD is supported by NIH P30 AG013854. Work done in the D.S.K. laboratory is supported by an NEI P30 grant, P30EY034070-01, and in part by an unrestricted grant from Research to Prevent Blindness awarded to the Gavin Herbert Eye Institute.

This paper is also included in the NIH HuBMAP program's Nature series publication collection: Human BioMolecular Atlas Program (nature.com)

See original here:

New Imaging Tool Advances Study of Lipid Biology - University of California San Diego