How does one calculate Rx/a in population genetics?



The ratio of sex-linked to autosomal mutation rates (R) is Rx/a = (2/3)(2 + α)/(1 + α) for X-linked genes and Ry/a = 2α/(1 + α) for Y-linked genes. How does one calculate Rx/a in order to find α?
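In practice, Rx/a is estimated from data (for example, as the ratio of X-linked to autosomal sequence divergence between species), and α, the male-to-female mutation rate ratio, is then obtained by inverting the formula algebraically: from R = (2/3)(2 + α)/(1 + α) one gets α = (4 − 3R)/(3R − 2), and from R = 2α/(1 + α) one gets α = R/(2 − R). A minimal sketch of both directions (function names are mine, not from the question):

```python
def rx_a(alpha):
    """X-to-autosome mutation rate ratio: Rx/a = (2/3)(2 + a)/(1 + a)."""
    return (2.0 / 3.0) * (2.0 + alpha) / (1.0 + alpha)

def ry_a(alpha):
    """Y-to-autosome mutation rate ratio: Ry/a = 2a/(1 + a)."""
    return 2.0 * alpha / (1.0 + alpha)

def alpha_from_rx(r):
    """Invert Rx/a = (2/3)(2 + a)/(1 + a)  =>  a = (4 - 3R)/(3R - 2)."""
    return (4.0 - 3.0 * r) / (3.0 * r - 2.0)

def alpha_from_ry(r):
    """Invert Ry/a = 2a/(1 + a)  =>  a = R/(2 - R)."""
    return r / (2.0 - r)
```

As a sanity check, α = 1 (equal male and female mutation rates) gives Rx/a = Ry/a = 1, and the inversion recovers any α fed into the forward formulas.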


Genomic Analysis Reveals Hypoxia Adaptation in the Tibetan Mastiff by Introgression of the Gray Wolf from the Tibetan Plateau

Benpeng Miao, Zhen Wang, Yixue Li, Genomic Analysis Reveals Hypoxia Adaptation in the Tibetan Mastiff by Introgression of the Gray Wolf from the Tibetan Plateau, Molecular Biology and Evolution, Volume 34, Issue 3, March 2017, Pages 734–743, https://doi.org/10.1093/molbev/msw274

The Tibetan Mastiff (TM), a native of the Tibetan Plateau, has quickly adapted to the extreme highland environment. Recently, the impact of positive selection on the TM genome was studied and potential hypoxia-adaptive genes were identified. However, the origin of the adaptive variants remains unknown. In this study, we investigated the signature of genetic introgression in the adaptation of TMs with dog and wolf genomic data from different altitudes in close geographic proximity. On a genome-wide scale, the TM was much more closely related to other dogs than wolves. However, using the ‘ABBA/BABA’ test, we identified genomic regions from the TM that possibly introgressed from Tibetan gray wolf. Several of the regions, including the EPAS1 and HBB loci, also showed the dominant signature of selective sweeps in the TM genome. We validated the introgression of the two loci by excluding the possibility of convergent evolution and ancestral polymorphisms and examined the haplotypes of all available canid genomes. The estimated time of introgression based on a non-coding region of the EPAS1 locus mostly overlapped with the Paleolithic era. Our results demonstrated that the introgression of hypoxia adaptive genes in wolves from the highland played an important role for dogs living in hypoxic environments, which indicated that domestic animals could acquire local adaptation quickly by secondary contact with their wild relatives.


Background

Peppers are members of the genus Capsicum, which originated in South America and represents one of the most economically important vegetable crops worldwide [1,2,3]. To date, 38 species of Capsicum have been reported (USDA-ARS, 2011). Of these, C. annuum, C. frutescens, C. chinense, C. baccatum, and C. pubescens are thought to have been domesticated [4]. Globally, the most predominant species is C. annuum, which has numerous commercial varieties varying greatly in size, shape, pungency, and color.

As the seed trade has developed and globalized, the commercial quality of seeds, which is based on authenticity and purity, has become increasingly important [5]. Traditionally, cultivar characterization was completed by field investigation of morphological traits; however, this process is time-consuming and labor-intensive and is thus not suitable for modern inspection demands [6]. A more high-throughput approach to distinguishing varieties is the use of molecular markers [5]. Indeed, genetic markers have been used for DNA fingerprinting, diversity analysis, variety identification, and marker-assisted breeding of multiple commercial crops [7, 8]. Moreover, several PCR-based tools have been used to detect genetic diversity in peppers, including random amplified polymorphic DNA (RAPD), restriction fragment length polymorphism (RFLP), and amplified fragment length polymorphism (AFLP) [9,10,11,12].

Recently, the genomes of two C. annuum cultivars, Zunla-1 and CM334, were sequenced [3, 13], which provided an important platform for the detection and development of genome-wide simple sequence repeat (SSR) and insertion or deletion (InDel) markers [14,15,16,17,18,19,20]. Although a large number of SSR and InDel markers have become available, these technologies are not suitable for large-scale germplasm characterization. Thus, there is an unmet need for an efficient, rapid, and high-throughput system capable of characterizing thousands of germplasm accessions.

One approach for meeting such high standards is the use of single-nucleotide polymorphisms (SNPs), which are good markers for genotyping because of their whole-genome coverage and primarily biallelic nature. Accordingly, multiple high-throughput SNP genotyping platforms have been developed, including the GoldenGate [21], Infinium [22], and TaqMan [23] assays and the KASPar platform (KBiosciences, www.kbioscience.co.uk). In recent years, high-throughput transcriptome sequencing and genotyping-by-sequencing (GBS) have been successfully used in pepper, generating highly informative genome-wide SNP data [24,25,26,27,28,29,30]. However, SNP marker genotyping is considered expensive as it requires a comprehensive technical platform and special equipment and reagents.

Genotyping by target sequencing (GBTS) is a targeted sequence-capture strategy that can genotype thousands of SSRs or SNPs using high-throughput sequencing technology. The two main types of GBTS are multiplex PCR-based and probe-in-solution-based target sequencing; the technology has been commercialized as AmpliSeq [31], NimbleGen [32], SureSelect [33], GenoBaits, and GenoPlexs [34]. To date, this technology has been widely used for medical applications but has rarely been used for agricultural species. However, a Target SSR-seq technique, which is a multiplex PCR-based approach, was successfully applied to the study of genetic diversity and structure in 382 cucumber varieties [35]. The results of this study demonstrated that GBTS is a customizable, flexible, high-throughput, low-cost, and accurate sequencing tool.

Peppers from China constitute one-third of the world's pepper production [36]. Until now, the genetic diversity of pepper accessions in China has primarily been investigated using SSR markers, but these surveys examined either only a few Chinese germplasm accessions (up to 32) [37] or a small number of SSR markers (up to 28) [36]. High-throughput SNP platforms for genotyping and identifying pepper varieties have lagged significantly behind those for SSRs, and the genetic diversity among pepper varieties in China has not yet been extensively analyzed. Therefore, the main objectives of the present work were: 1) to develop a Target SNP-seq technique suitable for genotyping pepper varieties; 2) to characterize composite core-SNP markers for use with the KASPar platform to maximize variety identification; and 3) to examine the level of genetic diversity, structure, and differentiation within 271 pepper varieties. This study demonstrated that the novel Target SNP-seq can be used as a rapid and efficient tool for genotyping peppers and that the genetic structure of these cultivated varieties has been strongly impacted by breeding programs that select for fruit shape.


REAL-LIFE APPLICATIONS

Circadian Rhythms

The term circadian derives from the Latin circa ("about") and dies ("day"), and, indeed, it takes "about" a day for the body to undergo its entire cycle of serotonin-melatonin conversions. In fact, the cycle takes almost exactly 25 hours. Why 25 hours and not 24? This is a fascinating and perplexing question.

It would be reasonable to assume that natural selection favors those organisms whose body clocks correspond to the regular cycles of Earth's rotation on its axis, which governs the length of a day — or, more specifically, a solar day. Yet the length of the human daily cycle has been confirmed in countless experiments, for instance, with subjects in an environment such as a cave, where levels of illumination are kept constant for weeks on end. In each such case, the subject's body clock adopts a 25-hour cycle.

POSSIBLE EXPLANATIONS FOR THE 25-HOUR CYCLE.

One might suggest that the length of the cycle has something to do with the fact that Earth's rate of rotation has changed, as indeed it has. But the speed of the planet's rotation has slowed, primarily because tidal friction with the Moon gradually drains Earth's rotational energy.

About 650 million years ago, long before humans or even dinosaurs appeared on the scene, Earth rotated on its axis about 400 times in the interval required to revolve around the Sun. This means that there were 400 days in a year. By the time the genus Homo emerged about two million years ago, days were considerably longer, though still shorter than they are now. This only means that the 25-hour human body clock would have been even less compatible with the length of a day in the distant past of our species.

One possible explanation of the 25-hour body clock is the length of the lunar day, or the amount of time it takes for the Moon to reappear in a given spot over the sky of Earth. In contrast to the 24-hour solar day, the lunar day lasts for 24 hours and 50 minutes — very close in length to the natural human cycle. Still, the exact relationships between the Moon's cycles and those of the human body have not been established fully: the idea that lunar cycles have an effect on menstruation, for instance, appears to be more rumor than fact.

PEAKS AND TROUGHS.

On the other hand, circadian rhythms do mirror the patterns of the Moon's gravitational pull on Earth, which results in a high and low tide each day. Likewise, the human circadian rhythm has its highs and lows, or peaks and troughs. In the circadian trough, which occurs about 4:00 a.m., body temperature is at its lowest, whereas at the peak, around 4:00 p.m., it reaches a high. A person may experience a lag in energy after lunchtime, but usually by about 4:00 in the afternoon, energy picks up — a result of the fact that the body has entered a peak time in its cycle.

This fact, by the way, points up the great wisdom of a practice common in Spanish-speaking countries and some other parts of the world: siestas. The siesta devotes one of the least productive parts of the day, the post-lunch lag, to rest, so that a person is equipped with energy for the rest of the afternoon and early evening — at precisely the time when energy is at a high. To compensate for the time "lost" on napping, many such societies maintain a later schedule, with offices closing in the early evening rather than late afternoon and with evening meals served at about 9:00 p.m.

Note that even though our body clocks run on a 25-hour day, they readily adjust to the 24-hour world in which we live. As long as a person is exposed to regular cycles of day and night, the pineal gland automatically adapts to the length of a 24-hour solar day. If a person has been living in a sunless cave, with no exposure to daylight for a length of time, it would take about three weeks for the pineal gland to reset itself, but thereafter it would track with Earth time consistently.

The adjustment of the body clock is not simply a matter of sending signals for sleep and wakefulness. In fact, the pineal gland is at the center of a complex information network that controls sleep cycles, body temperature, and stress-fighting hormones. Hence the link that we noted earlier between body temperature and circadian rhythms: just as the body reaches its lowest temperature in the circadian trough, it also enters a period of extremely deep sleep.

REGULATING THE BODY CLOCK.

Tied in with these sleep patterns are many other bodily functions. For example, bodybuilders and others who work out with weights experience their greatest benefits not when lifting (which, in fact, tears muscles down rather than building them up) but when resting — and particularly when sleeping — after having worked out earlier in the day. Likewise, deep sleep is associated with growth, as we have noted. Furthermore, it appears that dreaming may be essential to the well-being of the psyche, providing an opportunity for the brain to "clean out" the signals and data it has been receiving for the preceding 16 hours of wakefulness.

Given these and other important functions associated with deep sleep, it follows that the maintenance of the body clock is of great importance to the health of the human organism. Fortunately, animals' brains are programmed to make adjustments of the body clock so as to accommodate the daily cycles of light and dark. We have discussed the means by which the human brain achieves this accommodation, but it is not the only animal brain thus equipped. "Bird brains" (quite literally) are similarly able to make an adjustment: whereas humans have a natural 25-hour clock, birds run on a 23-hour circadian cycle, but their pineal glands likewise assist them in adapting to the 24-hour solar day.

The brains of birds, humans, and other animals respond to environmental features known collectively as zeitgebers (German for "time givers"), which aid in the adjustment to the solar schedule. The most obvious example is the change from day to night, but there are other zeitgebers of which we are less aware in our ordinary experience. For example, Earth's magnetic field goes through its own 24-hour cycle, which subtly influences our biological rhythms.

Interfering with the Body Clock

In modern life humans often interfere with their own body clocks, either deliberately and directly or indirectly and by accident. On the one hand, a person may drink coffee to stay awake at night, but he or she also may experience a sleep disorder as a result of some other situation, which may or may not be the result of purposeful action. Examples of sleep disorders that are the by-product of other activities include jet lag as well as the malfunctioning of the body clock that often stems from recreational drug use.

The causes for interference with a person's body clock may be outside that person's control to one degree or another. Working at night, for instance, is a condition that almost never suits a human being, no matter how much a person may insist that he or she is a "night person." Nevertheless, a person may be required by circumstances, such as schedule, economic necessity, or job availability, to take a night job. Another example of interference with the body clock would be narcolepsy (a condition characterized by brief attacks of deep sleep) or some other condition that is either congenital (something with which a person is born) or symptomatic (a symptom of some other condition rather than a condition in and of itself).

WHITE NIGHTS.

At least one example of human experience involving interference with the body clock relates to conditions completely outside people's control. This is the situation of the "white nights" or "midnight sun," whereby regions in the extreme north — Russia, Alaska, and Scandinavia — undergo periods of almost constant daylight from mid-May to late July. (These are matched by a much less pleasant phenomenon: near constant darkness from mid-November to late January.)

During those times people often line their windows with dark material to make it easier to go to sleep in a world where the Sun is nearly as bright at 3:00 a.m. as it is at 3:00 p.m. The situation is even more pronounced in Antarctica, where researchers and adventurers may find themselves much closer to the South Pole than people in Saint Petersburg, Anchorage, or Oslo are to the North Pole.

In Antarctica the human population is much higher in the summer, a period that coincides with the depth of winter in the Northern Hemisphere, and scientists or mountaineers trekking through remote regions may be forced to sleep in tents that keep out the cold but let in the light. Usually, however, the rugged conditions of life near the South Pole involve such exertions that by nighttime people are ready to sleep, light or no light.

SOME SLEEP DISORDERS.

Few people ever get to experience the white nights, but almost everyone has suffered through a temporary bout of insomnia — a condition known specifically as transient insomnia. An unfortunate few suffer from chronic insomnia or some other sleep disorder. Insomnia, the inability to go to sleep or to stay asleep, is one of the two most common sleep disorders, the other being hypersomnia, or excessive daytime sleepiness.

Transient forms of insomnia are usually treatable with short-term prescription drugs, but more serious conditions qualify as actual disorders and may require long-term treatment. These disorders may have as their cause drug use (either prescription or illegal) as well as medical or psychological problems. Among the most common of these more specialized disorders is apnea, the regular cessation of breathing whose most noticeable symptom is snoring.

Apnea, which affects a large portion of the United States population, is a potentially very serious condition that can bring about suffocation or even death. More often its effects are less dramatic, however, and manifest in hypersomnia, which is a result of lost sleep due to the fact that the sufferer actually is waking up numerous times throughout the night.

At the other extreme from apnea, in terms of prevalence among the population, is Kleine-Levin syndrome, which typically affects males in their late teens or twenties. The syndrome may bring about dramatic symptoms that range from excessive sleepiness, overeating, and irritability to abnormal behavior, hallucinations, and even loss of sexual inhibitions. Added to this strange mix is the fact that Kleine-Levin syndrome typically disappears after the person reaches the age of 40.

JET LAG.

There are numerous classes of sleep disorders, among them circadian rhythm disorders — those related to jet lag or work schedules. As we have seen, the pineal gland can adjust easily from a natural 25-hour cycle to a 24-hour one, but it can do so only gradually, and it cannot readily adapt to sudden changes of schedule, such as those brought about by air travel.

Jet lag is a physiological and psychological condition in humans that typically includes fatigue and irritability; it usually follows a long flight through several time zones and probably results from disruption of circadian rhythms. The name is fitting, since jet lag is associated almost exclusively with jets: traveling great distances by ship, even at the speeds of modern craft, allows the body at least some time to adjust.

Older modes of travel were too slow to involve jet lag; for this reason, the phenomenon is a relatively recent one. The only people who manage to experience jet lag without riding in a jet are those traveling in even faster craft — that is, astronauts. An astronaut orbiting Earth in a space shuttle experiences rapid shifts from day to night. If manned vessels ever go out into deep space, scientists will face a new problem: assisting the adjustment of circadian cycles to that sunless realm.

On a much more ordinary level, there is the jet lag of people who travel from the United States East Coast to Europe or between the East Coast and West Coast of the United States. The worst kinds of jet lag occur when a person flies from west to east across six or more time zones: anyone who flies to Europe from the East Coast is likely to spend much of the first day after arrival sleeping rather than sightseeing. Thereafter, it may take up to ten days (usually as long as or longer than most European vacations) for the body to adjust fully.

By contrast, someone who has flown from the East Coast to the West Coast feels unexpected energy. The reason is that when it is 6:00 a.m. in the Pacific time zone, it is 9:00 a.m. in the eastern time zone, to which a person's body clock (in this particular scenario) is still adapted. Therefore, at 6:00 in the morning, the newly arrived traveler will feel as good as he or she would normally feel at 9:00 a.m. back east. Conversely, at 9:00 p.m. in the west, it is midnight in the east. This means that the traveler is likely to feel tired long before his or her ordinary bedtime.

There are steps one can take to avoid, or at least minimize the effects of, jet lag. One is to ensure a regular sleep schedule prior to traveling, so as to minimize the effects of sleep deprivation, if the latter does occur. It is even better if one can, in the days prior to leaving, adopt a schedule adjusted to the new time zone. For example, if one were traveling from the East Coast to California, one would start going to bed three hours later, and rising three hours later as well. Changing eating habits in the days prior to departure may also help. Some experts on the subject recommend a four-day period in which one alternates heavy eating (days one and three) and very light eating (days two and four). It is believed that high-protein breakfasts stimulate the active, waking cycle, while high-carbohydrate evening meals stimulate the resting cycle; conversely, depriving the liver of carbohydrates may prepare the body clock to reset itself.

ON THE NIGHT SHIFT.

At least the body does adjust to jet lag; on the other hand, it may never become accustomed to working a night shift. If you stay up all night studying for a test, you will find that around 4:00 a.m. you hit a "lull" when you feel sleepy — and because of the lowered temperature at the circadian trough, you also feel cold. You might assume that this situation would improve if you worked regularly at night, but the evidence suggests that it does not.

As long as a person lives in a sunlit world of 24-hour solar days, the body clock remains adapted to that schedule, and this will be true whether the person is at home and in bed or at work behind a desk or counter during the hours of night. In other words, the person always will hit the circadian trough about 4:00 a.m. This is one of the reasons why most people find the idea of working at night so unattractive, even though it is clear that in our modern society some night-shift positions are essential.

People who have offices in their homes may find it beneficial to work at late hours, when the phone is not ringing and the world is quiet, but the "extra time" gained by working at night ultimately is counterbalanced by the body's reaction to changes in its biological rhythms. Such is also the case with night-shift workers, who never really adjust to their schedules even after years on the job.

There is such a thing as a "night person," or someone with a chronic condition known as delayed sleep phase syndrome. A person with this syndrome is apt to feel most alert in the late evening and night, with a corresponding lag of energy in the late mornings and afternoons. Even so, given the role of sunlight in governing the body clock, the condition does not really lend itself to regular night work but rather merely causes a person to experience problems adapting to the schedule maintained by most of society. One possible means of dealing with this problem is to go to bed three hours later than would be normal for an ordinary 9-to-5 schedule, and wake up three hours later as well; unfortunately, that is not practical for most people. Another treatment applied with success is exposure of a person to artificial, high-intensity, full-spectrum light, which augments the effect of sunlight, between the hours of 7:00 and 9:00 a.m.

COLONIZING THE NIGHT?

In this vein it is interesting to note that some of the optimistic predictions made in 1987 by Murray Melbin in his fascinating book Night as Frontier: Colonizing the World After Dark have not come to pass. Melbin, who explains circadian rhythms and the body clock in a highly readable and understandable fashion, makes a brilliant analysis of the means by which industrialized societies have extended their daily schedules into the nighttime hours. Thus, to use his analogy, such societies have "colonized" the night.

Until the invention in 1879 of the first successful incandescent lamp by the American inventor Thomas Edison (1847-1931), activity at night was limited. Torches, crude lamps, and candles in ancient times; metal lamps in the Middle Ages; and the various oil-burning lamps that applied the glass lantern chimney devised in 1490 by the Italian scientist and artist Leonardo da Vinci (1452-1519): all of these made it possible for a person to read at night and to perform other limited functions. After their introduction in the nineteenth century, street lamps in London, the first of their kind, made the streets safe for walking at late hours, but travel, large gatherings, and outdoor work after dark remained difficult before the advent of electric light.

Since 1879 the Western world has indeed "colonized" the night with all-night eateries, roads that are never free of traffic, and round-the-clock entertainment on radio, TV, and now the Internet. There are even hardware stores open all night in some major cities. Certainly today there are more gas stations, restaurants, television programs, and customer-service telephone lines that operate 24 hours than there were in 1987, when Melbin wrote his book, but it is unlikely that Americans will ever fully "colonize" the night in the thoroughgoing fashion that their ancestors colonized the New World. An example of the limits to night colonization is in air travel.

Before the events of September 11, 2001, when terrorists crashed hijacked planes into the World Trade Center in New York City and the Pentagon in Washington, D.C., the burden on America's airports had become almost unbearable. The concourses of Hartsfield International in Atlanta, Georgia, the world's busiest airport, were a nonstop melee of people, luggage, and noise, as travelers fought to change flights or pick up their bags. One obvious solution to the problem would have been to adopt a round-the-clock airport schedule, with flights regularly leaving at 3:00 or 4:00 in the morning.

No airport rushed to enact such a measure, however, and after September 11 heightened security concerns made it unlikely that any facility would adopt a 24-hour schedule, with the additional security threats it entailed. For a time at least, the volume of air traffic decreased dramatically, but even as it climbed back up in the months after the terrorist attacks, airports continued to operate on their ordinary schedules. The reason appears to be the difficulty of persuading people to adjust to a late-night schedule — that is, finding enough people willing to fly in the middle of the night and enough baggage handlers and ticket agents willing to service them. There are, it seems, limits to the extent to which nighttime can be colonized.

Other Examples of Biological Rhythms

Although the circadian rhythms of sleep and wakefulness are particularly important examples of biological cycles, they are far from the only ones. Not all rhythms, in fact, are circadian. Some are ultradian, meaning that they occur more than once a day. Examples include the cycles of taking in fluid and forming urine as well as cell-division cycles and cycles related to hormones and the endocrine glands that release them. For instance, the pituitary gland in the brain of a normal male mammal secretes hormones about every one to two hours during the day.

The overall cycle of sleeping and waking is circadian, but there is an ultradian cycle within sleep as the brain moves from drowsiness through light sleep into deep, slow-wave sleep and then into REM (rapid eye movement, or dream) sleep. Over the course of the night, this cycle, which lasts about 90 minutes, repeats itself several times. Among the functions affected by this cycle are heart rate and breathing, which slow down in deep sleep. Additionally, heartbeat and respiration are themselves ultradian cycles of very short duration.

MENSTRUATION AND OTHER INFRADIAN CYCLES.

In contrast to the ultra-quick ultradian cycles of the beating heart and the lungs' intake and outflow of air, there are much longer infradian cycles, which take more than a day to complete; the best known are roughly monthly. By far the most common is menstruation, which begins when a female mammal reaches a state of physical maturity and continues on a monthly basis until she is no longer able to conceive offspring.

When she becomes pregnant, the menstrual cycle shuts down and, in some cases, does not resume until several months after delivery of the offspring. Assuming she is in good health, the human female will experience fairly regular menstrual periods at intervals of 28 days. Among human females, it has long been known that the menstrual cycles of women who live or work in close proximity to one another tend to come into alignment. For example, college girls on the same floor in a dormitory are likely to share menstrual cycles.

The reasons for this alignment of menstrual cycles are not completely understood. Nor is the cause of the 28-day cycle evident. If it were the result of the Moon's cycles, all women on Earth would have menstrual cycles that last 29.5 days, the length of the Moon's cycle of phases. Furthermore, if there were a clear connection between the Moon and menstruation, the periods of all menstruating females on Earth would be aligned with the Moon's phases. Neither of these, of course, is the case.

CIRCANNUAL CYCLES.

Longer still than infradian cycles, circannual cycles, as their name suggests, take a year to complete. Among them is the cycle of dormancy and activity marked by the hibernation of certain species in the winter. There are also certain times of the year when animals shed things — fur, skin, antlers, or simply pounds. Likewise, at some points in the year animals gain weight.

People are affected strongly by the seasonal changes associated with the circannual cycle. There is almost no person who lives in a temperate zone (that is, one with four seasons) who is not capable of calling strong emotions to mind when imagining the sensations associated with winter, spring, summer, or fall. Some sensations, however, are better than others, and though there can be negative associations with spring or summer, by far the season most likely to induce ill effects in humans is winter.

The thirteen weeks between the winter solstice in late December and the vernal equinox in late March have such a powerful impact on the human psyche that scientists have identified a mental condition associated with it. It is SAD, or seasonal affective disorder, which seems to be related to the shortened days (and thus, ultimately, to the altered circadian rhythm) in wintertime.

As we have noted, the body responds to the onset of night and sleep by the release of melatonin, but when darkness lasts longer than normal, melatonin secretions become much more pronounced than they would be under ordinary conditions. The result of this hormone imbalance can be depression, which may be compounded by other conditions associated with winter. Among these conditions is "cabin fever," or restlessness brought about by lengthy confinement indoors. An effective treatment for SAD is exposure to intense bright light.

Studying Biological Rhythms

Treatment of SAD is just one example of the issues confronted by scientists working in the realm of chronobiology, a subdiscipline devoted to the study of biological rhythms. Naturally, a particularly significant area of chronobiological study is devoted to sleep research. The latter is a relatively new field of medicine stimulated by the discovery of REM sleep in 1953. In addition to studying such disorders as sleep apnea, sleep researchers are concerned with such issues as the effects of sleep deprivation and the impact on circadian rhythms brought about by isolation from sunlight.

Note that the scientific study of biological rhythms has nothing to do with "biorhythms," a fad that peaked in the 1970s but still has its adherents today. Biorhythms are akin to astrology in their emphasis on the moment of a person's birth, and though biorhythms have a bit more scientific basis than astrology, that in itself is not saying much. As we have seen, biological rhythms do govern much of human life, but the study of these rhythms does not offer special insight into the fate or future of a person — one of the principal claims made by adherents of biorhythms. As with all pseudosciences, belief in biorhythms is maintained by emphasizing those examples that seem to correlate with the theory and ignoring or explaining away the many facts that contradict it.

An example of scientific research in chronobiology and related fields is the work of the psychologist Stephany Biello at Glasgow University in Scotland, who in June 2000 announced findings linking the drug ecstasy to long-term damage to the body clock. As with LSD and many other drugs, ecstasy plays havoc with serotonin and may exert such a negative impact on the pathways of serotonin release in the pineal gland that it permanently alters the brain's ability to manufacture that vital hormone. Thus the drug, which induces a sense of euphoria in users, can also cause serious sleep and mood disorders as well as severe depression.

WHERE TO LEARN MORE

Center for Biological Timing (Web site). <http://www.cbt.virginia.edu/>.

Circadian Rhythms (Web site). <http://www.bio.warwick.ac.uk/millar/circad.html>.

Hughes, Martin. Bodyclock: The Effects of Time on Human Health. New York: Facts on File, 1989.

Melbin, Murray. Night as Frontier: Colonizing the World After Dark. New York: Free Press, 1987.

Orlock, Carol. Inner Time: The Science of Body Clocks and What Makes Us Tick. Secaucus, NJ: Carol Publishing Group, 1993.

Rose, Kenneth Jon. The Body in Time. New York: John Wiley and Sons, 1988.

Waterhouse, J. M., D. S. Waters, and M. E. Waterhouse. Your Body Clock. New York: Oxford University Press, 1990.

Winfree, Arthur T. The Timing of Biological Clocks. New York: Scientific American Library, 1987.


New algorithms: diagnosis and classification

Historically, the diagnosis of gout focused on the acute arthritic state and did not consider the potential for chronicity. Previously proposed classification criteria demonstrated suboptimal sensitivity and specificity, were never validated, or did not incorporate advances in imaging modalities. In 2015, the American College of Rheumatology (ACR) and the European League Against Rheumatism (EULAR) jointly published validated classification criteria that encompass acute and chronic aspects of gout, recent imaging advances, and weighting to maximize sensitivity and specificity 14. These criteria permit improved enrollment of patients with gout into studies and provide a structure that can inform clinical diagnosis. Under the new algorithm, the documented presence of monosodium urate (MSU) crystals in a symptomatic joint or tophus is a sufficient criterion for classifying gout. If this criterion is not met, a scoring system is applied that reflects characteristics of acute and chronic gout, including recent advances in imaging (see Table 1 as well as the “New views: imaging” section below). A score of at least 8 indicates gout 14. A convenient web-based “gout classification calculator” based on these criteria has been released by the University of Auckland in New Zealand (http://goutclassificationcalculator.auckland.ac.nz/).

Table 1.

Criterion                                      Category                           Score
Pattern of joint/bursa involvement             Ankle or midfoot                     1
                                               First metatarsophalangeal joint      2
Episodic symptoms (erythema; pain or           One symptom                          1
  tenderness; functional disability)           Two symptoms                         2
                                               Three symptoms                       3
Time course, at least two present (time to     One typical episode                  1
  maximal pain less than 24 hours;             Recurrent typical episodes           2
  resolution of symptoms in not more than
  14 days; complete resolution to baseline
  between episodes)
Clinical evidence of tophus                    Present                              4
Serum urate                                    < 4 mg/dL                           -4
                                               6 to < 8 mg/dL                       2
                                               8 to < 10 mg/dL                      3
                                               >= 10 mg/dL                          4
Synovial fluid analysis                        MSU negative                        -2
Imaging evidence of urate deposition in a      Present (either modality)            4
  symptomatic joint or bursa (ultrasound:
  double-contour sign; or dual-energy
  computed tomography: urate deposition)
Imaging evidence of gout-related damage        Present                              4
  (at least one erosion in conventional
  radiography of hands or feet or both)

Adapted from the American College of Rheumatology/European League Against Rheumatism 2015 Gout Classification Criteria 14 .
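The two-step decision rule described in the text (MSU crystals as a sufficient criterion, otherwise a score threshold of 8) can be sketched in a few lines of Python. This is an illustrative sketch only, not a clinical tool; the function name and dictionary keys are invented here, and the criterion weights follow Table 1.

```python
# Illustrative sketch of the ACR/EULAR 2015 classification logic described
# above. Not a clinical tool: the function name and keys are invented;
# criterion weights follow Table 1 (note that some scores are negative,
# e.g. serum urate < 4 mg/dL scores -4, MSU-negative synovial fluid -2).

def classify_gout(msu_crystals_confirmed: bool, criterion_scores: dict) -> bool:
    """Classify gout: MSU crystals in a symptomatic joint or tophus are
    sufficient on their own; otherwise the summed score must reach 8."""
    if msu_crystals_confirmed:
        return True
    return sum(criterion_scores.values()) >= 8


# Example: tophus (4) + imaging evidence of urate deposition (4) +
# ankle involvement (1) = 9, which meets the threshold of 8.
print(classify_gout(False, {"tophus": 4, "imaging": 4, "joint_pattern": 1}))  # True
```

Negative entries (low serum urate, MSU-negative synovial fluid) can pull the sum below the threshold, which is how the criteria penalize findings that argue against gout.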


Conclusions

The longstanding sympatry of crops and their wild relatives, along with some level of reproductive compatibility, may mean that many crops exist as part of a wild weedy–domesticated complex with occasional or even persistent gene flow (de Wet and Harlan 1975; Ellstrand et al. 1999, 2010; Ellstrand 2003; Zizumbo-Villarreal et al. 2005). Eggplant certainly seems to fit into this category, with evidence for introgression and the transfer of adaptive alleles between wild and domesticated populations. This highlights the potential for crop-wild gene flow to go undetected without genome-wide analysis, and could potentially lead to the origin of feral weeds, if it has not already.

Not only does this have implications for weed evolution (e.g., economic loss; Ellstrand et al. 2010) and for the potential release of transgenic crops, including eggplants (Jayaraman 2010), this scenario also represents an exciting and tractable model to study adaptive evolution and the origin and transfer of adaptive traits in parallel. Once the eggplant genome becomes available, it will be possible to determine the size of introgressed regions, answering whether this is historic admixture (such as between Neanderthals and humans; Vernot and Akey 2014) or recent and continual crop-wild gene flow (e.g., Arias and Rieseberg 1994; Mandel et al. 2016). This will also be fruitful in identifying candidate genes within introgressed regions that may correspond to adaptive phenotypes for a food-secure future (Godfray et al. 2010; Poppy et al. 2014).


Forensic Sciences 2021

LabRoots is pleased to announce our 3rd Forensic Sciences Virtual Event on May 5, 2021. Join us for this free, one-day event on May 5th as we discuss DNA, chemistry, toxicology, digital forensics, and the important marginal area where science and medicine interact with the law.

This year's event includes the following tracks:

Forensic Research: From Evidence to Testimony

Advances and Challenges in Forensic Research

Application of Analytical Methods in Forensic Sciences

Our virtual conference allows you to participate in a global setting with no travel or cost to you. The event will remain open for 2 years from the date of the live event, and the webinars will be available for unlimited on-demand viewing. This virtual conference also offers increased reach for the global forensic science community, with a high degree of interaction through live-streaming video and chat sessions.

Like the 2020 conference, this event will be produced on our robust platform, allowing you to watch, learn, and connect seamlessly across all desktop and mobile devices. Equipped with gamification and a point system, you can now move around the entire event, earning points for a chance to win one of LabRoots' most popular T-shirts.


Call for Posters: Virtual poster sessions offer the opportunity to present data to a global audience via a PDF poster and video summary, and to discuss results with interested colleagues through email. Plan now to have your poster included in the 2021 Forensic Sciences Virtual Event. Submission is free. Submit your abstract here.

Continuing Education: LabRoots is approved as a provider of continuing education programs in the clinical laboratory sciences by the ASCLS P.A.C.E.® Program. By attending this event, you can earn 1 Continuing Education credit per presentation, for a maximum of 30 credits.

Use #LRforensics to follow the conversation!

Nigel Brown, PhD, FIBMS, FRCPath, MRSC

Francis Diamond

Colleen Fitzpatrick, PhD

Henry C. Lee, PhD

Nicole M.M. Novroski, PhD

Jennifer Pechal, PhD

Katelynn Perrault, PhD

David Rivers, PhD

Cyril Wecht, M.D., J.D.

Brooke Weinger Kammrath, Ph.D., D-ABC

Frank Wendt, PhD

Karen Woodall, PhD

The increase in the number and diversity of drugs of abuse is a major concern; it also presents significant challenges for forensic laboratories that analyze seized substances and are under time pressure to produce results quickly. Most forensic drug chemistry laboratories follow SWGDRUG recommendations, which stipulate that two independent techniques should be used to analyze a drug sample. A typical workflow may include colorimetric or TLC presumptive analysis followed by a more selective method such as GC/MS. However, for many drugs, colorimetric tests are not available or result in a high rate of false positives, and TLC analysis can be time consuming. Infrared screening methods are also commonly used, but these can produce inconclusive results when analyzing drug mixtures. There is a need for a quick means of analysis to provide a more detailed screening result prior to a confirmatory test. A technique with high selectivity and rapid analysis could potentially improve the workflow for seized drug analysis. Samples are analyzed directly, with minimal sample preparation, and collision-induced fragmentation is used to generate both precursor and product ions utilizing in-source fragmentation. The data are processed by Live ID software, which facilitates near-real-time matching of acquired data to a spectral library and determines the average match factor. Results are provided in around one minute per sample. The RADIAN ASAP uses the specificity of mass spectrometry for improved drug identification and enables rapid triage of samples; depending on the nature of the sample, it can be considered either a Category A or B technique under SWGDRUG guidelines. This presentation will discuss the design features of the RADIAN ASAP mass spectrometer system that make it particularly suitable for seized drug analysis.
An explanation of the ionization process will be given, and examples of data will be presented from both 'pure' drugs and drug mixtures.
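The abstract does not specify how the Live ID match factor is computed, so the following is a hedged illustration only: cosine similarity between aligned intensity vectors, scaled to 0-1000, is a common convention in mass-spectral library searching and is assumed here.

```python
import math

# Hedged illustration: Live ID's actual match-factor algorithm is not
# described in the abstract. Cosine similarity between aligned intensity
# vectors, scaled to a 0-1000 range, is a common convention in
# mass-spectral library searching and is assumed here.

def match_factor(query: list, library: list) -> float:
    """Cosine similarity between two aligned spectra, scaled to 0-1000.
    Identical spectra score the maximum; orthogonal spectra score zero."""
    dot = sum(q * l for q, l in zip(query, library))
    norm = math.sqrt(sum(q * q for q in query)) * math.sqrt(sum(l * l for l in library))
    return 1000.0 * dot / norm if norm else 0.0


# A query spectrum matched against itself scores approximately 1000;
# a spectrum with no shared peaks scores 0.
print(match_factor([10.0, 40.0, 5.0], [10.0, 40.0, 5.0]))
print(match_factor([1.0, 0.0], [0.0, 1.0]))
```

Real library search tools typically also weight peaks by m/z and intensity before comparison; that refinement is omitted here for brevity.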

1. Learn how direct ionization analysis can aid the forensic chemist

2. Learn how the RADIAN ASAP system characterizes seized drug substances

3. Learn how the RADIAN ASAP system minimizes sample preparation required for analysis

This talk will start with an introduction to a number of illicit benzodiazepines that have appeared in North East England over the last couple of years. This will be followed by an overview of the GABA-A receptor and its possible central role in the toxic effects of a number of CNS depressant drugs. Attention will move to pregabalin (and gabapentin) and their toxicity, which is becoming more apparent as time progresses. This will include some details of their receptor in the CNS and their modulatory effects on other CNS transmitter systems. I will also include some details of the detection method (using HRMS and LC-MS/MS) and planned improvements for this year.

1. The variety of benzodiazepine molecules in circulation and their effects

2. Interaction of the gabapentinoids with other drugs affecting the CNS

3. Benzodiazepine and gabapentinoid receptor basics

Success in solving cases rests on a system that emphasizes teamwork, from scene recognition, collection, and preservation, to laboratory examination, identification, and individualization, to court testimony, interpretation, and reconstruction of the forensic evidence. If potential physical evidence is not recognized, collected, or properly preserved, its value is forever lost. Numerous day-to-day and high-profile cases have demonstrated this harsh reality, despite the availability of current analytical technologies, specialized equipment, and sophisticated forensic testing. Moreover, forensic scientists do not usually decide the extent to which forensic evidence is analyzed or how it is used in a civil or criminal case. Ultimately, it is the police officers, detectives, and crime scene technicians who usually complete the crime scene search and begin the forensic stages of an investigation. After the investigation is completed, during the pretrial or initial litigation stages, the prosecution and defense counsels determine which physical evidence will be utilized. The forensic scientist must be prepared not only to present evidence to the court, but also to explain how the evidence was tested, what techniques were used, the error rates and standards of operation, and anything else the judge might need to know. As always, ethics and truthfulness are of utmost importance when testifying under oath in a court of law. Forensic scientists must present all the facts to the court in an objective and fair manner. During the trial or adjudication stages, the judge determines the admissibility of the forensic evidence. There is no guarantee that every party in the evidentiary process will sufficiently understand the potential of forensic evidence. It is also a rare, but possible, reality that some of these individuals possess questionable integrity or character.
It is also possible that the direct or cross-examination of a forensic witness will amount to unfair and unreasonable personal attacks. Therefore, training, education, and standardization of forensic investigation are all critical for the improvement of forensic services in the judicial system. As a greater number of police officers, forensic scientists, attorneys, judges, and members of the general public acquire correct information about the value and limitations of forensic science and new technologies, the quality of examination of forensic evidence should continue to improve. The result would be to make forensic science even more valuable and to maintain the high quality of justice that this society deserves.

1. Know the history, development, and current status of forensic evidence

2. Understand the scientific, expert, discovery, and admissibility issues of forensic evidence

3. Learn the legal standards, the role of evidence, Frye vs. Daubert, and Federal Rule 702

4. Learn how a scientific theory or technique is shown to be falsifiable and refutable using empirical testing methods

5. Learn the common tactics used by lawyers to attack scientific experts, and the skills needed to survive in court

Genealogists have long supported the legal system through probate work, heir searching, and identifying the next of kin of servicemen. Even though DNA was first introduced to genealogists over 20 years ago, only recently has genetic genealogy been applied to forensic identification. The first known case successfully solved using forensic genealogy was the 1992-1993 Phoenix Canal Murders, solved in 2015 by comparing a Y-profile from a crime scene to online public genetic genealogy databases to obtain the surname of the killer. This was possible because the Y-STR markers used by the genetic genealogy community for researching family pedigrees were the same as those developed for forensic Y-STR amplification kits. More recent cases, like Buckskin Girl and the Golden State Killer, have been solved using autosomal SNP testing, similar to what is provided by direct-to-consumer (DTC) companies such as Ancestry.com. Because DTC companies refuse to accept forensic cases, it has been necessary to create autosomal SNP datasets using private laboratories and to search for matches using GEDmatch, a public genetic genealogy database. This has created issues for both the forensic and genealogical communities. SNP testing was borrowed from the biomedical industry, so forensically accredited laboratories are unable to produce the required SNP data. This has raised legal questions concerning the use of private labs and of crowd-sourced genealogical data that is not the result of peer-reviewed scientific research. Genetic genealogists are also divided in their opinions about law enforcement use of their personal data. This talk discusses how genetic genealogy developed into a tool for forensic identification.
It also presents its limitations and capabilities, along with the more important issues faced by the legal system and the genealogical community in their efforts to work together to balance the privacy of the individual with the requirements of public safety.

1. To understand the development of genetic genealogy as a forensic tool

2. To understand the basic forensic genetic genealogy identification methodology

3. To review practical forensic genetic genealogical case studies

Research on decomposition odor has evolved significantly since the concept was first introduced in the literature in 2004. This work is foundational to understanding several areas of forensic science, including insect attraction to human remains, cadaver-detection canine response, disaster recovery, missing persons search strategies, and more. Some researchers have also commented on the potential to use this information for developing handheld decomposition sensors, for estimating the postmortem interval, and for identifying cadaveric status non-invasively at autopsy. Preliminary work in this area focused predominantly on identifying compounds at specific time points during decomposition. Since then, studies have taken a more comprehensive look at the factors influencing the development of decomposition odor over the entire course of decomposition. This talk will provide an overview of the study designs used in preliminary work on decomposition odor and the progression of study designs to date. The progression of analytical strategies from one-dimensional methods to multidimensional separations will also be discussed, highlighting the benefits and drawbacks of different techniques based on the study hypothesis. Finally, the major limitations placed on this research during the COVID-19 pandemic will be addressed, including challenges such as mobility, supplies, facility access, funding access, personnel, and technical support.

1. Explain how decomposition odor is used in forensic science

2. Identify the components of effective research design for decomposition odor research

3. Evaluate the choice of analytical technique based on a research goal

The detection and analysis of traces for the purpose of providing clues in criminal investigations has a rich history that dates back to the mid-18th century. Traces are remnants of past events that provide both associative and investigative information; however, the quality and quantity of information that can be obtained from their interrogation depends on both the tools (or instruments) being used and the expertise of the examiner. The traditional tool of light microscopy provides valuable physical and optical properties of a material that can aid in its identification and comparison. When paired with instrumental methods for chemical analysis, most notably infrared and Raman microspectroscopy, a plethora of valuable information can be obtained from a range of traces. Further, new developments in instrumentation, such as morphologically-directed Raman spectroscopy (MDRS), particle-correlated Raman spectroscopy (PCRS), and simultaneous Optical Photothermal Infrared & Raman (O-PTIR+R) microspectroscopy, have enabled more information to be obtained from the analysis of complex samples than was previously possible. This presentation will demonstrate the interrogating power of these new tools for the analysis of a range of samples of forensic interest, including illicit and counterfeit drugs, soils, gunshot residue (GSR), hoax white powders, and automotive paint.

1. Describe the variety of traces that deliver valuable associative and investigative information for forensic inquiries

2. Discuss new developments in instrumentation, including MDRS, PCRS and O-PTIR+R.

3. Understand the added value provided when traces are analyzed with MDRS, PCRS and O-PTIR+R

Every day in America nearly seven thousand people die. That's about two and a half million deaths per year. When a death is violent, unexpected, medically unattended, or suspicious (which occurs in about one-quarter to one-third of cases), it becomes necessary to determine whether the death was due to foul play. A specially trained physician, known as a medical examiner or forensic pathologist, is asked to perform an autopsy to inspect the body externally and internally, and to issue a report that can close the case or open it to the legal system for criminal prosecution. Despite the many death cases to probe, fewer than five hundred forensic pathologists are trained and certified in the U.S.

A. Suspected Crime Scene Investigation

  1. Forensic Pathologist
  2. Criminalist
  3. Forensic Photographer
  4. Homicide Detective

B. Identification, Collection, and Preservation of Evidence

C. Performance of Autopsy by Forensic Pathologist

  1. Gross and Microscopic Examination of Body Organs and Tissues
  2. Autopsy Photographs
  3. Collection of Relevant Biological Specimens for Toxicological Analysis and Other Studies

D. Correlation of Autopsy Findings with Medical Records, and Police Investigative Reports

E. Completion of Post-Mortem Protocol

Assessing the manner of death is the next step for the forensic pathologist, who selects one of five choices: homicide, suicide, accident, natural, or undetermined. This declaration is prominently displayed on the autopsy report and becomes part of the official record that family members, and often the news media, have access to. While it may take a few weeks for all the test results to come in, the autopsy doctor may share preliminary information with law enforcement, even though the office of the medical examiner should be wholly independent from police and prosecutors. Letting authorities know that someone died at the hands of another person ensures public safety by allowing a criminal investigation to be started swiftly. If someone is arrested for murder, prosecutors and defense attorneys will receive the completed autopsy information prior to the defendant's trial, and the forensic pathologist will be a critical witness in the courtroom.

1. To understand what official medicolegal investigation is about

2. Identify how these procedures are accomplished

In this session, a review of the impact of COVID-19 on the Forensic Science Program at the University of Toronto will be explored. Challenges for both undergraduate and graduate education will be discussed within the context of general forensic programming at the University/College level. Looking to the future, solutions-based ideas will be presented with an opportunity for participant reflection and participation as it relates to institutional-specific approaches to forensic pedagogy and academic research.

1. Understand unique agency/institutional challenges in pedagogy and research for forensic science programs

2. Discover how UTM has pivoted to increase virtual/remote learning in an innovative and interactive way in order to compensate for lack of on-campus experiences

3. Explore next steps and future directions for valuable educational approaches at the undergraduate and graduate levels for forensic science education and research

When violent crimes are committed, the crime scene is usually covered with blood. Blood may be shed when the person is standing, sitting, or on the ground. The weapon that caused blood to be released influences the shape of the bloodstains that form, as do many other factors. Much more obscure are the 'bloodstains' dispersed about a crime scene that are not actually bloodstains at all; they are insect stains, or artifacts. Stains produced by insects, usually flies, are created by a multitude of mechanisms, at least ten that we know of. When intermixed with real bloodstains, fly stains can be incredibly challenging, if not impossible, to recognize. Why does that matter? For one, patterns of blood can reveal directionality, which is especially important during reconstruction of the bloodshed event. The size and shape of bloodstains reveal details about how the stains were created. Blood also contains DNA of either the victim or the assailant. Fly stains can provide false information about any of these parameters if mistaken for true bloodstains. It is even possible that insect artifacts introduce DNA to a crime scene from someone not associated with the crime. Thus, it is absolutely essential to be able to distinguish insect stains from true bloodstains. This conference is focused on pattern evidence created by flies at crime scenes. We will examine how and when fly stains are formed on different materials, and within the context of the types of tissues and fluids associated with a corpse. We will also discuss how to recognize the presence of insect artifacts and the tools currently available for crime scene investigators to reliably identify insect stains in different contexts. Ultimately, this information will help investigators recognize the presence of insect artifacts and avoid confusing them with true bloodstains.

1. Explain the difference between insect stains and fly artifacts.

2. Understand the challenges in detection and distinction of fly stains from true bloodstains.

3. Describe the methodology available for recognition of fly stains at crime scenes.

Short tandem repeats (STRs) are the gold-standard genetic loci used for source attribution of evidentiary material in legal matters. Their power lies in their high heterozygosity and large allele spread. Commercially available STR assays leverage linkage disequilibrium (LD)-independent STRs (i.e., alleles at one STR are not correlated with alleles at another STR) to calculate the product of allele frequencies in a population. The result of the product rule is a random match probability (RMP) describing the rarity of the observed alleles in an STR profile in the context of some reference population. In collaboration with Dr. Nicole Novroski, our team is addressing two major STR limitations using recent advances in genomics and computational biology. First, the size of STR amplicons prohibits genotyping from degraded substrates such as human remains; we leveraged the LD structure of the genome to predict unobserved forensically relevant STR genotypes from identity-informative single nucleotide polymorphisms (SNPs). We present a bioinformatic workflow for predicting STRs and the resulting population genetic parameters to evaluate the reliability of predicted genotypes. Second, mixed STR profiles are a major obstacle for the forensic DNA practitioner due, in part, to the ability of robust modern STR assays to detect low-level contributors. We simulated 200 populations modeled from four United States population groups using the forensim R package, resulting in 155,400 individuals (27 STRs per individual). Across the 200 populations, we frequently observed evidence of residual population stratification, detected by Hardy-Weinberg equilibrium deviations and substantial allele frequency differences. In 2,400 mixed DNA profiles (2-, 3-, and 4-person), we show that variation in allele frequency across simulated populations significantly altered likelihood ratios (LRs) from 2-person mixtures but did not influence 3- or 4-person mixtures.
Our findings demonstrate that studying STRs in a forensic context can be (i) cost-effective for early career researchers, (ii) extremely well powered due to almost no upper limit for sample size, and (iii) a source of novel insights into the behavior of STR mixtures.
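As a minimal illustration of the product rule mentioned above (not the authors' actual pipeline), the RMP under LD independence is the product of per-locus genotype frequencies: 2pq for a heterozygote and p² for a homozygote under Hardy-Weinberg assumptions. The allele frequencies below are invented for illustration.

```python
# Minimal sketch of the product rule described in the abstract, not the
# authors' workflow. Under LD independence the random match probability
# (RMP) is the product of per-locus genotype frequencies: 2*p*q for a
# heterozygote, p*p for a homozygote (Hardy-Weinberg assumptions).
# All allele frequencies below are invented for illustration.

def genotype_frequency(p: float, q: float, heterozygous: bool) -> float:
    """Expected population frequency of one observed genotype at a locus."""
    return 2.0 * p * q if heterozygous else p * p


def random_match_probability(profile: list) -> float:
    """Multiply per-locus genotype frequencies across independent loci.

    profile: list of (p, q, heterozygous) tuples, one per STR locus.
    """
    rmp = 1.0
    for p, q, het in profile:
        rmp *= genotype_frequency(p, q, het)
    return rmp


# Three hypothetical loci: two heterozygous, one homozygous.
# Per-locus frequencies: 2*0.10*0.20 = 0.04, 2*0.05*0.30 = 0.03, 0.15**2 = 0.0225.
profile = [(0.10, 0.20, True), (0.05, 0.30, True), (0.15, 0.15, False)]
print(random_match_probability(profile))
```

Because the rarity estimate depends directly on the reference allele frequencies, the population stratification the abstract describes (allele frequency differences across simulated populations) translates straight into shifted RMPs and likelihood ratios.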

1. Describe two major limitations of loci used for source attribution (e.g., SNPs and STRs)

2. Compare and contrast the benefits and limitations of simulated data in forensic science research

3. Explain how population genetics concepts apply to forensic DNA casework and research

The postmortem microbiome is an emerging field in forensic science with broad application for death investigation (e.g., time since death). While the foundation for forensic microbiology was laid in the early 2000s in response to bioterrorism threats, the use of entire microbial communities (the microbial organisms, their genes, and their gene functions) in a forensic context began to expand in the early 2010s. Improvements in molecular approaches over the past twenty years to study the microbiome in medical fields have revealed the power and potential of microbial diversity as a means to explore the associations of microorganisms with the health of living individuals. Researchers within the forensic sciences adopted these technologies to determine if and how the postmortem microbiome could be used to aid death investigation. The importance of the human microbiome and its function in health during life is well known. However, few studies characterize the dynamics of the postmortem human microbiota from samples collected during real-world death investigation. Here, we discuss the ongoing results from an extensive database of microbial samples, with a focus on a major metropolitan city. Microbial taxonomic profiles from routine death investigation cases were generated through targeted amplicon sequencing (16S rRNA). The goal of this project is to provide a robust dataset of postmortem microbial communities. In the future, these data will advance death investigation by identifying specific microbial taxa or signatures of importance for application in the forensic sciences and beyond.

1. Define and discuss the variability of human microbiomes after death, as determined from samples collected during routine death investigation.

2. Use bioinformatic tools to identify potential signatures of postmortem microbiomes associated with circumstances of death.

3. Discuss the future of postmortem microbiomes in forensic sciences and other applied sciences, such as public health.

The increasing use of prescription and non-medical prescription opioids has created a significant public health crisis. The rising rate of opioid-related deaths across North America has led to a proliferation of opioids in forensic casework and increased the workload for forensic toxicology laboratories. In addition, increased availability of illicit fentanyl, fentanyl analogues including carfentanil, and other novel synthetic opioids such as U-47700, has created many challenges for forensic toxicology laboratories. Methods capable of detecting and quantitating new compounds must be established despite limited information on toxicologically relevant concentrations. The collaboration between forensic toxicology laboratories and public health organizations can be a critical tool in guiding public safety and public health decisions to help address this significant healthcare crisis. This presentation will provide an overview on the challenges the opioid crisis has had on forensic toxicologists and how the timely detection of novel opioids and/or regional trends in drug use can provide valuable information for public health organizations.

1. Define the opioid crisis

2. Identify different types of opioids that are observed in forensic toxicology laboratories

3. Explain challenges for forensic toxicologists in identifying and interpreting opioid concentrations in forensic casework


Short tandem repeats (STRs) are gold-standard genetic loci used for source attribution of evidentiary material in legal matters. Their power lies in their high heterozygosity and large allele spread. Commercially available STR assays leverage linkage disequilibrium (LD)-independent STRs (i.e., alleles at one STR are not correlated with alleles at another STR) to calculate the product of allele frequencies in a population. The result of the product rule is a random match probability (RMP) describing the rarity of the observed alleles in an STR profile in the context of some reference population. In collaboration with Dr. Nicole Novroski, our team is addressing two major STR limitations using recent advances in genomics and computational biology. First, the size of STR amplicons prohibits genotyping from degraded substrates such as human remains; we leveraged the LD structure of the genome to predict unobserved forensically relevant STR genotypes from identity-informative single nucleotide polymorphisms (SNPs). We present a bioinformatic workflow for predicting STRs and the resulting population genetic parameters to evaluate the reliability of predicted genotypes. Second, mixed STR profiles are a major obstacle for the forensic DNA practitioner due, in part, to the sensitivity of modern STR assays, which detect low-level contributors. We simulated 200 populations modeled from four United States population groups using the forensim R package, resulting in 155,400 individuals (27 STRs per individual). Across the 200 populations, we frequently observed evidence of residual population stratification, detected through Hardy-Weinberg equilibrium deviations and substantial allele frequency differences. In 2,400 mixed DNA profiles (2-, 3-, and 4-person), we show that variation in allele frequency across simulated populations significantly altered likelihood ratios (LRs) for 2-person mixtures but did not influence 3- or 4-person mixtures.
Our findings demonstrate that studying STRs in a forensic context can be (i) cost-effective for early career researchers, (ii) extremely well powered owing to the practically unlimited sample size of simulated data, and (iii) a source of novel insights into the behavior of STR mixtures.
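As background for the product rule described above, the sketch below shows how a random match probability is assembled from per-locus genotype frequencies under Hardy-Weinberg assumptions. The loci and allele frequencies are hypothetical, not drawn from any real population database.

```python
# Sketch of the product rule: a random match probability (RMP) is the
# product of per-locus genotype frequencies across LD-independent STRs.
# All frequencies below are hypothetical, not from a real database.

def genotype_freq(p, q=None):
    """Hardy-Weinberg genotype frequency: p^2 (homozygote) or 2pq (heterozygote)."""
    return p * p if q is None else 2 * p * q

def random_match_probability(profile):
    """Multiply genotype frequencies over independent loci (the product rule)."""
    rmp = 1.0
    for alleles in profile:
        rmp *= genotype_freq(*alleles)
    return rmp

# Hypothetical 3-locus profile: a one-element tuple marks a homozygote.
profile = [(0.11, 0.18), (0.25,), (0.07, 0.30)]
print(random_match_probability(profile))  # ~1.04e-4; smaller with more loci
```

Commercial assays multiply across 20+ such loci, which is why full-profile RMPs become vanishingly small.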

1. Describe two major limitations of loci used for source attribution (e.g., SNPs and STRs)

2. Compare and contrast the benefits and limitations of simulated data in forensic science research

3. Explain how population genetics concepts apply to forensic DNA casework and research

The detection and analysis of traces for the purpose of providing clues in criminal investigations has a rich history that dates back to the mid-18th Century. Traces are remnants of past events that provide both associative and investigative information; however, the quality and quantity of information that can be obtained from their interrogation depends on both the tools (or instruments) being used and the expertise of the examiner. The traditional tool of light microscopy provides valuable physical and optical properties of a material that can aid in its identification and comparison. When paired with instrumental methods for chemical analysis, most notably in the form of infrared and Raman microspectroscopy, a plethora of valuable information can be obtained from a range of traces. Further, new developments in instrumentation, such as morphologically-directed Raman spectroscopy (MDRS), particle-correlated Raman spectroscopy (PCRS), and simultaneous Optical Photothermal Infrared & Raman (O-PTIR+R) microspectroscopy, have enabled more information to be obtained from the analysis of complex samples than could previously be obtained. This presentation will demonstrate the interrogating power of these new tools for the analysis of a range of samples of forensic interest, including illicit and counterfeit drugs, soils, gunshot residue (GSR), hoax white powders, and automotive paint.

1. Describe the variety of traces that deliver valuable associative and investigative information for forensic inquiries

2. Discuss new developments in instrumentation, including MDRS, PCRS and O-PTIR+R.

3. Understand the added value provided when traces are analyzed with MDRS, PCRS and O-PTIR+R

The postmortem microbiome is an emerging field in forensic science with broad application for death investigation (e.g., time since death). While the foundation for forensic microbiology began in the early 2000s, resulting from bioterrorism threats, the expansion of using entire microbial communities (the microbial organisms, their genes, and their gene functions) in a forensic context began in the early 2010s. Improvements in molecular approaches over the past twenty years to study the microbiome in medical fields have revealed the power and potential use of microbial diversity as a means to explore the associations of microorganisms with the health of living individuals. Researchers within the forensic sciences adopted these technologies to determine if and how the postmortem microbiome could be used to aid death investigation. The importance of the human microbiome and its function in health during life is well known. However, few studies characterize the dynamics of the postmortem human microbiota from samples collected during real-world death investigation. Here, we discuss the ongoing results from an extensive database of microbial samples with a focus on a major metropolitan city. Microbial taxonomic profiles from routine death investigation cases were generated through targeted amplicon sequencing (16S rRNA). The goal of this project is to provide a robust dataset of postmortem microbial communities. In the future, these data will advance death investigation by identifying specific microbial taxa or signatures of importance for application in forensic sciences and beyond.

1. Define and discuss the variability of human microbiomes after death, as determined from samples collected during routine death investigation.

2. Use bioinformatic tools to identify potential signatures of postmortem microbiomes associated with circumstances of death.

3. Discuss the future of postmortem microbiomes in forensic sciences and other applied sciences, such as public health.

Research on decomposition odor has evolved significantly since the first introduction of this concept in the literature in 2004. This work is foundational to understanding several areas of forensic science, including insect attraction to human remains, cadaver-detection canine response, disaster recovery, missing persons search strategies, and more. Some researchers have also commented on the potential to use this information for developing handheld decomposition sensors, for estimating postmortem interval, and to identify cadaveric status in a non-invasive way at autopsy. Preliminary work in this area focused predominantly on identifying compounds at specific time points during decomposition. Since then, studies have taken a more comprehensive look at the factors influencing the development of decomposition odor over the entire course of decomposition. This talk will provide an overview of study designs used in preliminary work on decomposition odor and the progression of study designs to date. The progression of analytical strategies from one-dimensional methods to multidimensional separations will also be discussed, highlighting the benefits and drawbacks of different techniques based on the study hypothesis. Finally, the major limitations placed on the progression of this research during the COVID-19 pandemic will also be discussed, including challenges such as mobility, supplies, facility access, funding access, personnel, and technical support.

1. Explain how decomposition odor is used in forensic science

2. Identify the components of effective research design for decomposition odor research

3. Evaluate the choice of analytical technique based on a research goal

When violent crimes are committed, the crime scene is usually covered with blood. Blood may be shed when the person is standing, sitting, or on the ground. The weapon that caused blood to be released influences the shape of the bloodstains that form, as do many other factors. Much more obscure are the 'bloodstains' dispersed about a crime scene that are not actually bloodstains at all; they are insect stains, or artifacts. Stains produced by insects, usually flies, are created by a multitude of mechanisms (at least ten that we know of). When intermixed with real bloodstains, fly stains can be incredibly challenging, if not impossible, to recognize. Why does that matter? For one, patterns of blood can reveal directionality, which is especially important during reconstruction of the bloodshed event. The size and shape of bloodstains reveal details about how the stains were created. Blood also contains DNA of either the victim or assailant. Fly stains can provide false information about any of these parameters if confused with true bloodstains. It is even possible that insect artifacts introduce DNA to a crime scene from someone not associated with the crime. Thus, it is absolutely essential to be able to distinguish insect stains from true bloodstains. This conference is focused on pattern evidence created by flies at crime scenes. We will examine how and when fly stains are formed on different materials, and within the context of the types of tissues and fluids associated with a corpse. We will also discuss how to recognize the presence of insect artifacts and the tools currently available for crime scene investigators to reliably identify insect stains under different contexts. Ultimately, this information will help investigators recognize the presence of insect artifacts and avoid confusing them with true bloodstains.

1. Explain the difference between insect stains and fly artifacts.

2. Understand the challenges in detection and distinction of fly stains from true bloodstains.

3. Describe the methodology available for recognition of fly stains at crime scenes.

In this session, a review of the impact of COVID-19 on the Forensic Science Program at the University of Toronto will be explored. Challenges for both undergraduate and graduate education will be discussed within the context of general forensic programming at the University/College level. Looking to the future, solutions-based ideas will be presented with an opportunity for participant reflection and participation as it relates to institutional-specific approaches to forensic pedagogy and academic research.

1. Understand unique agency/institutional challenges in pedagogy and research for forensic science programs

2. Discover how UTM has pivoted to increase virtual/remote learning in an innovative and interactive way in order to compensate for lack of on-campus experiences

3. Explore next steps and future directions for valuable educational approaches at the undergraduate and graduate levels for forensic science education and research

Genealogists have long supported the legal system through probate work, heir searching, and identifying the next of kin of servicemen. Even though DNA was first introduced to genealogists over 20 years ago, only recently has genetic genealogy been applied to forensic identification. The first known case successfully solved using forensic genealogy was the 1992-1993 Phoenix Canal Murders, solved in 2015 by comparing a Y-profile from a crime scene against online public genetic genealogy databases to obtain the surname of the killer. This was possible because the Y-STR markers used by the genetic genealogy community for researching family pedigrees were the same as those developed for forensic Y-STR amplification kits. More recent cases, such as Buckskin Girl and the Golden State Killer, have been solved using autosomal SNP testing similar to that provided by direct-to-consumer (DTC) companies such as Ancestry.com. Because DTC companies refuse to accept forensic cases, it has been necessary to create autosomal SNP datasets using private laboratories and to search for matches using GEDmatch, a public genetic genealogy database. This has created issues for both the forensic and genealogical communities. SNP testing was borrowed from the biomedical industry, meaning that forensically accredited laboratories are unable to produce the required SNP data. This has raised legal questions concerning the use of private labs and of crowd-sourced genealogical data that is not the result of peer-reviewed scientific research. Genetic genealogists are also divided in their opinions about law enforcement use of their personal data. This talk discusses how genetic genealogy developed into a tool for forensic identification.
It also presents its capabilities and limitations, along with the more important issues faced by the legal system and the genealogical community in their efforts to work together to balance the privacy of the individual with the requirements of public safety.

1. To understand the development of genetic genealogy as a forensic tool

2. To understand the basic forensic genetic genealogy identification methodology

3. To review practical forensic genetic genealogical case studies

Every day in America nearly seven thousand people die. That's about two and a half million deaths per year. When a death is violent, unexpected, medically unattended, or suspicious, which occurs in about one-quarter to one-third of cases, it becomes necessary to determine whether the death was due to foul play. A specially trained physician, known as a medical examiner or forensic pathologist, is asked to perform an autopsy to inspect the body externally and internally, and to issue a report that can close the case or open it to the legal system for criminal prosecution. Despite the many death cases to probe, fewer than five hundred forensic pathologists are trained and certified in the U.S.

A. Suspected Crime Scene Investigation

  1. Forensic Pathologist
  2. Criminalist
  3. Forensic Photographer
  4. Homicide Detective

B. Identification, Collection, and Preservation of Evidence

C. Performance of Autopsy by Forensic Pathologist

  1. Gross and Microscopic Examination of Body Organs and Tissues
  2. Autopsy Photographs
  3. Collection of Relevant Biological Specimens for Toxicological Analysis and Other Studies

D. Correlation of Autopsy Findings with Medical Records and Police Investigative Reports

E. Completion of Post-Mortem Protocol

Assessing the manner of death is the next step for the forensic pathologist, who selects one of five choices: homicide, suicide, accident, natural, or undetermined. This declaration is prominently displayed on the autopsy report and becomes part of the official record that family members, and often the news media, have access to. While it may take a few weeks for all the test results to come in, the autopsy doctor may share preliminary information with law enforcement, even though the office of the medical examiner should be wholly independent from police and prosecutors. Letting authorities know that someone died at the hands of another person ensures public safety by allowing a criminal investigation to be started swiftly. If someone is arrested for murder, prosecutors and defense attorneys will receive the completed autopsy information prior to the defendant's trial, and the forensic pathologist will be a critical witness in the courtroom.

1. To understand what official medicolegal investigation is about

2. Identify how these procedures are accomplished

Success in solving cases rests on a system that emphasizes teamwork, from scene recognition, collection, and preservation, to laboratory examination, identification, and individualization, through to court testimony, interpretation, and reconstruction of the forensic evidence. If potential physical evidence is not recognized, collected, or properly preserved, its value will be forever lost. Numerous day-to-day and high-profile cases have demonstrated this harsh reality, despite the availability of current analytical technologies, specialized equipment, and sophisticated forensic testing. However, forensic scientists do not usually decide the extent of forensic evidence analyzed or its use in a civil or criminal case. Ultimately, it is the police officers, detectives, and crime scene technicians who usually complete the crime scene search and begin the forensic stages of an investigation. After the investigation is completed, and during the pretrial or initial litigation stages, the prosecution and defense counsels determine which physical evidence will be utilized. The forensic scientist must be prepared not only to present evidence to the court but also to explain how the evidence was tested, what techniques were used, the error rates and standards of operation, and anything else the judge might need to know. As always, ethics and truthfulness are of utmost importance when testifying under oath in a court of law. Forensic scientists must present all the facts to the court in an objective and fair manner. During the trial or adjudication stages, the judge determines the admissibility of the forensic evidence. There is no guarantee that any of the parties who are part of the evidentiary process will sufficiently understand the potential of forensic evidence. It is also a rare, but possible, reality that some of these individuals possess questionable integrity or character.
It is also possible that the direct or cross-examination of a forensic witness will amount to unfair and unreasonable personal attacks. Therefore, training, education, and standardization of forensic investigation are all critical for the improvement of forensic services in the judicial system. As a greater number of police officers, forensic scientists, attorneys, judges, and members of the general public acquire correct information about the value and limitations of forensic science and new technologies, the quality of examination of forensic evidence should continue to improve. The result would be to make forensic science even more valuable and to maintain the high quality of justice that this society deserves.

1. Knowledge of history, development, and the status of forensic evidence

2. Understand the Scientific Issues, Expert Issues, Discovery issues and the Admissibility Issues of forensic evidence

3. Learn the legal standards, the role of evidence, Frye vs. Daubert, and Federal Rule 702

4. Learn how a scientific theory or technique is shown to be falsifiable and refutable through empirical testing methods

5. Learn the common tactics used by lawyers to attack scientific experts, and the skills for serving as an expert witness in court

You have a job to do. Whether it's identifying seized drugs, presenting reliable forensic data in court, analyzing prohibited substances in a sports doping investigation, monitoring illicit drug use, or keeping up with the demand to identify new psychoactive substances, your job presents challenges. With Waters as your partner, you're never alone. We're here to help you overcome those challenges every day. We provide powerful forensic solutions that enable you to get the critical data you need. Our industry-leading liquid chromatography, mass spectrometry, informatics, and chemistries are designed to give you information you can trust. Our comprehensive support is always here for you. At Waters, we're committed to advancing science, and we're dedicated to you. Allow us to prove it. Contact us today to learn more about how we can support you and your laboratory.

The speakers below have been approved for Continuing Education Credits.


Energy-Producing Steps

So far, glycolysis has cost the cell two ATP molecules and produced two small, three-carbon sugar molecules. Both of these molecules will proceed through the second half of the pathway, and sufficient energy will be extracted to pay back the two ATP molecules used as an initial investment and produce a profit for the cell of two additional ATP molecules and two even higher-energy NADH molecules (Figure 3).

During the energy-producing steps, additional enzymes continue to catalyze the breakdown of glucose (Figure 3). The end result of these reactions is two 3-carbon molecules of pyruvate.

Figure 3 The second half of glycolysis involves phosphorylation without ATP investment (step 6) and produces two NADH and four ATP molecules per glucose.

An important rate-limiting step occurs at step 6 of glycolysis. If you look at Figure 3, you will notice that during step 6, NAD+ is converted into NADH. NADH contains more energy than NAD+ and is therefore a desired product of this reaction. However, the continuation of the reaction depends upon the availability of NAD+. Thus, NADH must be continuously converted back into NAD+ to keep this step going. If NAD+ is not available, the second half of glycolysis slows down or stops.

If oxygen is available in the system, NADH is readily converted back into NAD+ by the later processes of aerobic cellular respiration. However, if no oxygen is available, NADH is not converted back into NAD+. Without NAD+, the reaction at step 6 cannot proceed, and glycolysis slows or stops. In an environment without oxygen, an alternate pathway (fermentation) can provide the oxidation of NADH to NAD+.
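The per-glucose energy accounting described above can be summarized in a few lines. This is a didactic tally of the investment and payoff phases, not a metabolic model.

```python
# Per-glucose energy tally for glycolysis as described above:
# two ATP invested in the first half, four ATP and two NADH
# returned in the second half.

atp_invested = 2    # energy-requiring steps (first half)
atp_produced = 4    # energy-producing steps (second half)
nadh_produced = 2   # one NADH per three-carbon sugar at step 6

net_atp = atp_produced - atp_invested
print(f"Net gain per glucose: {net_atp} ATP and {nadh_produced} NADH")
# Net gain per glucose: 2 ATP and 2 NADH
```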


Abstract

Dysregulation of the cardiac autonomic nervous system, as indexed by reduced heart rate variability (HRV), has been associated with the development of high blood pressure (BP). However, the underlying pathological mechanisms are not yet fully understood. This study aimed to estimate the heritability of HRV and BP and to determine their genetic overlap. We used baseline data of the 3-generation Lifelines population-based cohort study (n=149 067; mean age, 44.5 years). In-house software was used to calculate root mean square of successive differences and SD of normal-to-normal intervals as indices of HRV based on 10-second resting ECGs. BP was recorded with an automatic BP monitor. We estimated heritabilities and genetic correlations with variance components methods in ASReml software. We additionally estimated genetic correlations with bivariate linkage disequilibrium score regression using publicly available genome-wide association study data. The heritability (SE) estimates were 15.6% (0.90%) for SD of normal-to-normal intervals and 17.9% (0.90%) for root mean square of successive differences. For BP measures, they ranged from 24.4% (0.90%) for pulse pressure to 30.3% (0.90%) for diastolic BP. Significant negative genetic correlations (all P<0.0001) of root mean square of successive differences/SD of normal-to-normal intervals with systolic BP (−0.20/−0.16) and with diastolic BP (−0.15/−0.13) were observed. LD score regression showed largely consistent genetic correlation estimates of root mean square of successive differences/SD of normal-to-normal intervals with systolic BP (range, −0.08 to −0.23) and diastolic BP (range, −0.20 to −0.27). Our study shows a substantial contribution of genetic factors in explaining the variance of HRV and BP measures in the general population. The significant negative genetic correlations between HRV and BP indicate that the genetic pathways for HRV and BP partially overlap.

Introduction

Heart rate variability (HRV), reflecting beat-to-beat fluctuations in heart rate over time, 1 has emerged as one of the most widely used noninvasive indices of cardiac autonomic nervous system function. The imbalance of the autonomic cardiovascular control plays a key role in the risk for high blood pressure (BP). 2 The modulation of vagal tone helps to maintain the dynamic autonomic function necessary for short-term BP regulation. More specifically, sympathetic inhibition reduces peripheral resistance, while parasympathetic activation attenuates heart rate and contractility. Conversely, sympathetic activation and subsequent parasympathetic inactivation allow the baroreflex to elevate BP. 3 Both cross-sectional 4 and prospective 5,6 studies have shown associations between low HRV and high BP, suggesting an essential role of lower HRV in the development of hypertension.

Besides the established demographic 7 and lifestyle factors 8 that have been reported to explain the individual differences in HRV level at rest, numerous studies have shown that genetic factors contribute substantially to the variance. For example, Sinnreich et al 9 reported heritability estimates of 41% and 39% for SD of normal-to-normal intervals (SDNN) and root mean square of successive differences (RMSSD), respectively, based on 5-minute Holter recordings in the Kibbutzim family study. Recent investigations from the Oman Family Study 10,11 reported heritability estimates of 12.3% to 20.5% from 10-minute ECG recordings in a supine position. In twin studies, the genetic contribution to individual differences in HRV measures can be as high as 74%. 12,13 Likewise, previous studies reported that individual differences in BP traits, such as systolic BP (SBP), diastolic BP (DBP), and pulse pressure (PP), could for a large part be accounted for by genetic factors. 14 In a meta-analysis of published twin studies, Wang et al 15 reported that the pooled heritabilities of SBP, DBP, and PP were 54%, 49%, and 50%, respectively. Family-based studies from Switzerland, 16 Oman, 10,17 and the Seychelles 18 reported heritability estimates ranging from 19% to 24% for SBP, 5% to 25% for DBP, and 18% to 37% for PP.

Although the studies mentioned above have shown the contribution of genetic factors for individual variation in HRV and BP, heritability estimates were from relatively small samples. Moreover, most of these studies did not correctly adjust for the well-known and recently more recognized effect of heart rate on HRV. 19 Thus, large population-based family studies are needed for better understanding of the genetic effects.

Additionally, although an association between HRV and hypertension was found in many epidemiological studies, the underlying pathophysiological mechanisms are not yet fully understood. Given that HRV and BP are both heritable, we hypothesized that shared genetic factors may partly explain the negative association between the 2 traits. In this study, we aimed to (1) determine the genetic contributions to individual differences in both HRV and BP and (2) estimate the magnitude of any potential genetic overlap between HRV and BP. We used data from the population-based Lifelines Cohort Study and Biobank, 20 which with its 3-generation family design, large sample size, and broad age range is ideally suited for this study.

Methods

Data Availability

The data that were used in this study are available from the Lifelines Cohort Study and Biobank ( [email protected] nl ) upon reasonable request. Lifelines is a facility that is open to all researchers; information on application and data access procedures is available at https://www.lifelines.nl/researcher/how-to-apply.

Study Setting and Population

The Lifelines Cohort Study and Biobank with >167 000 participants is a large population-based prospective study in the northern part of the Netherlands, aiming to investigate risk factors for multifactorial diseases. The design and cohort profile of the Lifelines study have been described before. 20,21 In short, baseline data were collected between 2006 and 2013. The recruitment of the Lifelines study was family based by design. Eligible participants between 20 and 50 years of age were invited to participate through their general practitioners. After the inclusion of these individuals, their partner, children, parents, and partner's parents were also invited to participate in the study. In addition, single individuals could register for participation online. In this way, a 3-generation family study was realized. Subsequently, we used the information on family members and information on (anonymized) names and birth dates of parents provided by all participants in questionnaires to define relationships between undefined family members in Lifelines. For instance, 2 sibs could participate in Lifelines and obtain different family IDs if they were invited by their respective spouses. This process resulted in 40 496 singletons (ie, individuals without any relative in the sample) and 30 914 families (of size ≥2) of up to 4 generations, with an average family size of 4.12. The largest family connected 189 participants. Spouses without children were considered as a family of size 2, even though they are genetically unrelated. The Figure shows an example of a 23-member Lifelines family extending over 4 generations.

Figure. An example of a 23-member Lifelines pedigree extending over 4 generations. Numbers: Lifelines subject identifications; squares: men; circles: women; gray: founders.

During the baseline visit, ECG recordings were obtained from 153 793 participants (91.8%) aged ≥13 years. From these, HRV could not be calculated in 4586 participants due to excessive noise and ectopic (nonsinus node) beats, and 140 participants were excluded due to extreme values (<−5 or >5 SD from the mean) for either HRV or BP measures. In the current analysis, 149 067 individuals (1.03% children; 96.5% European ancestry) with an age range of 13 to 94 years were included, of whom 40 955 were singletons. The remaining individuals belonged to a total of 29 107 families with an average family size of 3.71. The largest family connected 169 participants.

All participants signed an informed consent. The Lifelines cohort study is conducted according to the principles of the Declaration of Helsinki and follows the research code of University Medical Center Groningen; it was approved by its Medical Ethical Committee.

Measurements

A 10-second 12-lead resting ECG was recorded while participants were in a supine resting position. For the ECG recording, CardioPerfect software (Welch Allyn DT100 recorder; Welch Allyn, Skaneateles Falls, NY) was used. From the 12 leads, 4 (I, II, V4, and V5) were selected to detect R peaks in the ECG. The details of the HRV calculation in Lifelines have been published previously. 7 In short, ECG recordings were excluded from the HRV calculation if (1) the number of beats was <5, (2) the ratio between the maximal and minimal interbeat interval (IBI) exceeded 1.4, indicating a missing trigger or extrasystolic beat, (3) there was extremely low variability (defined as an SD of IBI <1.2 ms), or (4) <60% of the recording time was included in the calculation. In-house software was used to calculate the RMSSD and SDNN as indices of HRV, which are used in the present study. Heart rate has a well-established strong inverse relationship with HRV, which includes a mathematical dependency of the variance in IBI on the mean IBI that is unrelated to the underlying biology. Our group has reintroduced a recommended approach to correct HRV for its dependency on the mean IBI of consecutive R peaks using coefficients of variation. 19,22 The coefficient of variation expresses the amount of IBI variability relative to the mean IBI of each participant. We applied this method to additionally calculate HRV values corrected for the influence of mean IBI. Heart rate was calculated from the ECG recording as 60 000/(mean IBI).
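The HRV indices described above can be sketched as follows, computed from a hypothetical series of interbeat intervals. This is an illustration of the definitions only, not the Lifelines in-house software.

```python
# Sketch of SDNN, RMSSD, and the coefficient-of-variation correction
# from a series of interbeat intervals (IBIs, in ms). The IBI values
# are illustrative, not real recordings.
import statistics

def sdnn(ibis):
    """SD of normal-to-normal intervals."""
    return statistics.stdev(ibis)

def rmssd(ibis):
    """Root mean square of successive differences between adjacent IBIs."""
    diffs = [b - a for a, b in zip(ibis, ibis[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def cv_corrected(hrv_value, ibis):
    """Coefficient of variation: HRV relative to the participant's mean IBI."""
    return hrv_value / statistics.mean(ibis)

ibis = [812, 798, 830, 805, 821, 799, 815]   # hypothetical 10-s recording
heart_rate = 60000 / statistics.mean(ibis)   # beats per minute
print(sdnn(ibis), rmssd(ibis), cv_corrected(rmssd(ibis), ibis), heart_rate)
```

The coefficient-of-variation step is what removes the mathematical dependency of IBI variance on mean IBI noted in the text.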

Simultaneously with the ECG recordings, BP was measured with an automatic BP monitor (DinaMap PRO 100V2) every minute for 10 minutes while the participant was in a supine resting position. The mean of the last 3 measurements was used to calculate SBP and DBP. For participants taking antihypertensive medication (see below), we corrected the measured BP values by adding 15 mm Hg for SBP and 10 mm Hg for DBP. 23 From the adjusted BP values, PP and mean arterial pressure (MAP) were calculated as PP=SBP−DBP and MAP=(2×DBP+SBP)/3.
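The BP derivations above (the +15/+10 mm Hg medication adjustment, pulse pressure, and mean arterial pressure) can be sketched as below; the function names are illustrative, not from any published analysis code.

```python
# Sketch of the BP derivations described above: medication adjustment,
# pulse pressure (PP), and mean arterial pressure (MAP).

def adjust_for_medication(sbp, dbp, on_antihypertensives):
    """Add 15/10 mm Hg to SBP/DBP for participants on antihypertensives."""
    if on_antihypertensives:
        return sbp + 15, dbp + 10
    return sbp, dbp

def pulse_pressure(sbp, dbp):
    return sbp - dbp

def mean_arterial_pressure(sbp, dbp):
    return (2 * dbp + sbp) / 3

sbp, dbp = adjust_for_medication(128, 82, on_antihypertensives=True)
print(sbp, dbp)                          # 143 92
print(pulse_pressure(sbp, dbp))          # 51
print(mean_arterial_pressure(sbp, dbp))  # 109.0
```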

Medications and diseases that have been reported to influence HRV were included as covariates. Medications were defined from prescriptions self-reported by participants during the baseline visit, classified according to the Anatomical Therapeutic Chemical classification. A list of Anatomical Therapeutic Chemical codes prescribed to treat hypertension, depression, type 2 diabetes mellitus, and cardiovascular diseases in the Netherlands was retrieved from the Dutch Pharmacotherapeutic Compass (https://www.farmacotherapeutischkompas.nl). Participants were considered to have cardiovascular disease if (1) they self-reported one of the following diseases: heart failure, atrial fibrillation, or vascular disease (myocardial infarction, stroke, or aneurysm) or (2) they used medications related to these conditions (β-blockers, angiotensin-converting enzyme inhibitors, diuretics, vitamin K antagonists, statins, aspirin, or clopidogrel). Any participant with self-reported type 2 diabetes mellitus, use of antidiabetes medication(s), a fasting blood glucose ≥7.0 mmol/L, or HbA1c ≥6.5% was considered to have type 2 diabetes mellitus. Hypertension was defined as SBP ≥140 mm Hg, DBP ≥90 mm Hg, or use of antihypertensive medication. Body mass index was calculated as weight/height squared and expressed in kg/m2.

Statistical Analysis

The baseline characteristics of participants, stratified by sex, are described as mean (SD) for continuous traits and as percentages for categorical variables. A natural logarithmic transformation was applied to achieve approximate normality of the HRV indices. ANOVA and χ2 tests were used to compare differences between men and women.

Heritability Estimate

Heritability is defined as the proportion of phenotypic variation explained by the genetic differences of individuals in a population. For the univariate analyses, we assumed a linear mixed model for the heritability analysis as follows: y = Xb + Za a + Zf f + e, where y is a vector of the response variable (HRV or BP); b, a vector of regression coefficients for the fixed effects; a, additive genetic effects with variance σ²a A; f, family (shared environment) effects with variance σ²f I; and e, residuals (environmental effects) with variance σ²e I. X is the design matrix of the fixed effects, Za is the design matrix mapping subjects to the genetic kinship (relationship) matrix A, and Zf is the design matrix for family (group) effects. There are no known genetic relationships between individuals in different families.

Once the variance components were estimated from the linear mixed model, narrow-sense heritability was calculated as h² = σ²a/(σ²a + σ²f + σ²e), where σ²a is the additive genetic variance, σ²f is the shared environmental variance, and σ²e is the residual (environmental) variance. Singletons were included in the analysis and contributed to the estimation of variances and phenotypic correlations but not to the genetic correlations.
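The variance-component arithmetic can be sketched as follows; the function name and the variance values are illustrative, not Lifelines estimates:

```python
def narrow_sense_h2(var_a, var_f, var_e):
    """Narrow-sense heritability: additive genetic variance over
    total phenotypic variance (additive + family + residual)."""
    return var_a / (var_a + var_f + var_e)

# Illustrative variance components for a log-transformed HRV index
h2 = narrow_sense_h2(var_a=0.15, var_f=0.01, var_e=0.84)
```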

Genetic Correlation Estimate

The genetic correlations were estimated using the same designs and similar methods as used to estimate heritability, in an expanded set of equations (ie, a bivariate model). In the bivariate analyses, the genetic correlations between HRV and BP were obtained from the estimated additive genetic covariance and variance components as rG = σAxAy/√(σ²Ax σ²Ay), where σAxAy is the additive genetic covariance between traits x and y, and σ²Ax and σ²Ay are the additive genetic variances for traits x and y, respectively.
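The standardization of the additive genetic covariance can be written as a one-line function; the numbers below are illustrative only:

```python
import math

def genetic_correlation(cov_axy, var_ax, var_ay):
    """Genetic correlation: additive genetic covariance divided by the
    geometric mean of the two additive genetic variances."""
    return cov_axy / math.sqrt(var_ax * var_ay)

# Illustrative components: a negative covariance between an HRV index and a BP trait
r_g = genetic_correlation(cov_axy=-0.06, var_ax=0.09, var_ay=0.16)
```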

To test the significance of the h² estimates (h²>0) and genetic correlations (|rG|>0), the model in which all variances and correlations were estimated was compared, using a likelihood-ratio test, to a model in which the additive genetic variances and genetic correlations for all measures were constrained to be zero. Similarly, genetic correlations were constrained to be equal to 1 or −1 to test for complete overlap of genetic effects (|rG|=1). 24 In all analyses, the fixed effects (or covariates) included age, sex, and body mass index. Age squared was also added to the model to accommodate the curvilinear relationship of HRV with age. 7 In further analyses, heritability estimates of HRV indices were adjusted for a history of disease (cardiovascular disease, hypertension, and type 2 diabetes mellitus) and antidepressant use. We did not include smoking and alcohol use as additional covariates in our models because we have previously shown that the influence of these lifestyle factors on HRV is negligible. 7 Here we also confirmed for BP that the addition of these lifestyle factors as covariates had only minimal impact on heritability estimates (Table S1 in the Data Supplement). We report the proportion of variance explained by these covariates. The analyses were performed using ASReml 4.1 software, 25 a statistical package that fits linear mixed models using restricted maximum likelihood to estimate the variance components.
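The likelihood-ratio comparison can be sketched as below; the log-likelihood values are hypothetical, and for simplicity the statistic is referred to a chi-square distribution with 1 df (ASReml handles the model fitting and constraints itself):

```python
import math

def lrt_chi2_df1(ll_full, ll_constrained):
    """Likelihood-ratio test: twice the log-likelihood difference between the
    full and constrained models, with a chi-square (1 df) p-value.

    Note: testing a variance component against zero is a boundary case, for
    which this p-value is conservative."""
    stat = 2 * (ll_full - ll_constrained)
    p = math.erfc(math.sqrt(stat / 2))  # survival function of chi2 with 1 df
    return stat, p

# Hypothetical REML log-likelihoods for the full and constrained models
stat, p = lrt_chi2_df1(ll_full=-1000.0, ll_constrained=-1003.0)
```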

We additionally estimated the genetic correlations using bivariate linkage disequilibrium score regression 26 on the latest publicly available genome-wide association study summary statistics for HRV 27 and BP. 28–30

Results

Characteristics of the participants are provided in Table 1. Men had a lower heart rate. Means of RMSSD and SDNN were significantly higher in women than in men, and the difference remained significant after correcting the values for mean IBI. In general, all BP measures were significantly higher in men, and women had a lower body mass index. The prevalence of hypertension and type 2 diabetes mellitus was higher in men, but no sex difference was seen for cardiovascular diseases. The proportion of antidepressant use was significantly higher in women.

Table 1. Characteristics of Study Participants and the Distribution of Heart Rate Variability and Blood Pressure Measurements for Men and Women Separately

Means (SD) are shown unless indicated otherwise. BMI indicates body mass index; BP, blood pressure; DBP, diastolic blood pressure; ln, logarithmic transformation; lnRMSSDc, log-transformed corrected root mean square of successive differences; lnSDNN, log-transformed SD of normal-to-normal intervals; lnSDNNc, log-transformed corrected SD of normal-to-normal intervals; MAP, mean arterial pressure; PP, pulse pressure; RMSSD, root mean square of successive differences; RMSSDc, corrected root mean square of successive differences; SBP, systolic blood pressure; SDNN, SD of normal-to-normal intervals; and SDNNc, corrected SD of normal-to-normal intervals.

* Age-adjusted P value.

† BP values were corrected for antihypertensive use.

Table 2 shows the proportion of phenotypic variance explained by additive genetic factors, shared family environment, and covariates for the different traits. Heart rate showed a heritability of 22.7%. The univariate narrow-sense heritability estimates for the HRV indices ranged from 13.4% to 17.9%. Correcting them for the mean IBI slightly lowered the h² estimates for both RMSSD and SDNN. The h² estimates for BP measures ranged from 24.4% for PP to 30.3% for DBP. SBP and MAP showed heritability estimates of 28.0% and 30.1%, respectively. In addition to the additive genetic influence, we calculated the proportions of variance explained by shared family environment and by covariates for HRV, heart rate, and BP measures. The shared family environment contributed 4.5% of the variance for heart rate. The contribution of familial environmental effects to the variance of HRV was negligible. With regard to BP, we found small effects of the shared family environment, ranging from 0.9% for PP to 3.2% for MAP. The percentage of variance explained by the covariates was 18.4% for SDNN and 19.8% for RMSSD. These contributions became slightly larger, 20.9% and 23.7%, respectively, when the effect of mean IBI was taken into account. Similarly, the results showed a substantial contribution of covariates for BP, ranging from 19.0% for PP to 28.3% for SBP. The included covariates explained only 2.6% of the variance for heart rate.

Table 2. Heritability Estimates and Proportion of Variance Due to Shared Family Environment and Covariates of HR, HRV, and BP-Related Measures

All heritability estimates were adjusted for age, age2, sex, and body mass index. Heritabilities for HRV were additionally adjusted for cardiovascular disease, hypertension, type 2 diabetes mellitus, and antidepressant medication use. BP indicates blood pressure; DBP, diastolic blood pressure; HR, heart rate; HRV, heart rate variability; ln, logarithmic transformation; lnRMSSD, log-transformed root mean square of successive differences; lnRMSSDc, log-transformed corrected root mean square of successive differences; lnSDNN, log-transformed SD of normal-to-normal intervals; lnSDNNc, log-transformed corrected SD of normal-to-normal intervals; MAP, mean arterial pressure; PP, pulse pressure; PVC, proportion of variance explained by covariates; and SBP, systolic blood pressure.

* All heritability estimates were significantly different from zero.

† BP values were corrected for antihypertensive use.

In Table 3, the bivariate genetic, phenotypic, and environmental correlations between HRV and BP are shown. We found a significant genetic correlation of −0.60 between RMSSD and heart rate and of −0.53 between SDNN and heart rate. These correlations were reduced to −0.38 and −0.22, respectively, when the HRV values were corrected for the effect of mean IBI. There were negative genetic correlations between RMSSD and BP-related traits, with a correlation of −0.20 with SBP, −0.15 with DBP, −0.18 with MAP, and −0.14 with PP, all of which were significantly different from zero. Similarly, there were slightly smaller but consistently significant correlations between SDNN and the BP measures; however, the correlation between SDNN and PP was not statistically significant. When the analyses were repeated with HRV measures corrected for the mean IBI, HRV and BP measures still showed negative phenotypic correlations (ranging from −0.05 to −0.07), with the highest correlation observed between corrected RMSSD and SBP, but they were smaller than for the uncorrected HRV values. Nevertheless, most genetic correlations remained significantly different from zero. All genetic correlations were significantly different from 1. The bivariate linkage disequilibrium score regression largely confirmed the direction and magnitude of our results based on the Lifelines pedigree data, with estimates of rG with RMSSD and SDNN ranging from −0.08 to −0.23 for SBP and from −0.20 to −0.27 for DBP (Table 4).

Table 3. Bivariate Quantitative Genetic Analyses of HR Variability With HR and Blood Pressure Examining the Genetic, Phenotypic, and Environmental Correlations

Models were adjusted for age, age2, sex, and BMI. BMI indicates body mass index; DBP, diastolic blood pressure; HR, heart rate; lnRMSSD, log-transformed root mean square of successive differences; lnRMSSDc, log-transformed corrected root mean square of successive differences; lnSDNN, log-transformed SD of normal-to-normal intervals; lnSDNNc, log-transformed corrected SD of normal-to-normal intervals; MAP, mean arterial pressure; PP, pulse pressure; rE, environmental correlation; rG, genetic correlation; rP, phenotypic correlation; and SBP, systolic blood pressure.

* Nonsignificant genetic correlations (P>0.05).

Table 4. Genetic Correlations Between HRV and Blood Pressure as Computed Using Bivariate Linkage Disequilibrium Score Regression Based on Genome-wide Association Study Summary Statistics

The genome-wide association study summary statistics for HRV traits derive from Nolte et al. 27 DBP indicates diastolic blood pressure; HRV, heart rate variability; rG, genetic correlation; RMSSD, root mean square of successive differences; SBP, systolic blood pressure; and SDNN, SD of normal-to-normal intervals.

Discussion

In this study, we showed that genetic factors significantly influence the variance in HRV and BP. Our results also showed significant negative genetic correlations between HRV and BP measures, suggesting that shared genetic factors partly explain the inverse association between these traits.

In the present study, we found heritability estimates for RMSSD and SDNN similar to those in the Framingham Heart Study 31 and in recent reports from the Oman Family Study. 10,11 Our estimates were lower than those from the Kibbutzim Family Study in Israel: 39% for SDNN and 41% for RMSSD. 9 The relatively higher heritability values in the Israeli study might be explained by its inclusion of a more homogeneous sample. Our heritability estimates for SDNN and RMSSD remained significant after taking into account the effect of mean heart rate on HRV measures. This finding confirms that the genetic influence on HRV measures is largely independent of the influence of heart rate. Our results also showed substantial heritability of BP measured in the supine position, ranging from 24.4% for PP to 30.3% for DBP. These estimates are slightly higher than those reported in previous family studies, 10,16,17 which might be due to our large sample size. A small study in Sweden reported that none of the BP measures were heritable, 32 and another small study conducted in the Seychelles reported a nonsignificant heritability for DBP but a higher heritability than we found for PP (h²=24.4%). 18 However, both of these studies were likely underpowered (n=260 and 314 participants, respectively). Only one family study, from Nigeria, 33 reported a heritability estimate for MAP, which was slightly higher than ours (36% versus 30%). Heritability estimates may also vary between populations because of differences in the ethnic backgrounds of study participants.

The present study shows consistently significant and negative genetic correlations between HRV and BP, particularly with SBP and DBP. This is in line with previous findings from cross-sectional 4,34 and prospective studies 5,6 reporting that lower HRV is associated with high BP. It supports our hypothesis that shared genetic factors partly explain the negative associations between HRV and BP. In agreement with this, the genetic correlations obtained using bivariate linkage disequilibrium score regression on genome-wide association study summary statistics for HRV 27 and BP 28–30 showed largely consistent estimates. On the contrary, the Oman Family Study 10 recently reported a nonsignificant genetic correlation; however, its authors acknowledged that their sample size might have been too small to detect the underlying genetic contributions.

A major strength of our study is that it constitutes the largest family study to date on this subject, providing ample power for precise estimates of heritability and genetic overlap. Unlike previous studies, 9–11 we applied a recommended correction method to adjust for the well-known effect of heart rate on HRV before recalculating heritability estimates. Furthermore, familial effects were included in the analysis to minimize inflation of the heritability estimates by the common environment shared by family members. Our study, however, has the following limitations. First, antihypertensive treatment might obscure familial contributions to BP variation. 35 However, we adjusted the measured BP values by adding 15 mm Hg to the measured SBP and 10 mm Hg to the measured DBP, as recommended by Tobin et al. 23 This correction approximates pretreatment BP values, may restore the correct population ranking of BP values to a large extent, and optimizes estimates of the genetic variance component. Second, because the vast majority (96.5%) of participants are of Caucasian ancestry, the results of this study cannot be generalized to other ethnicities. Finally, our study was limited to the time-domain parameters SDNN and RMSSD because the ultrashort ECG recordings are too short to assess frequency-domain HRV parameters. 1

To conclude, in this large family study, we showed that the contributions of genetic factors to the variance of HRV (ranging from 15.6% to 17.9%) and BP (ranging from 24.4% to 30.3%) in the general population are substantial. The significant negative genetic correlations between HRV and BP indicate that the genes (genetic pathways) for HRV and BP partially overlap.

Perspectives

These findings imply the need for future studies to identify additional genetic variants, as the variance explained by the common single-nucleotide polymorphisms discovered in previous genome-wide association studies is still low (0.9%–2.3% for HRV measures and 2.9%–5.7% for BP measures). 27,30 The negative genetic correlations between HRV and BP are in line with a causal effect of cardiac vagal control on the development of hypertension. However, further discussion is warranted on what the results may mean: (1) pleiotropic genes, (2) a causal effect, or (3) reverse causation, meaning that an increase in BP causes compensatory changes in cardiac autonomic nervous system activity that lead to reduced HRV. Mendelian randomization, a statistical approach to test the direction of causality in epidemiologic studies, may be considered in future studies to distinguish these possibilities. If a causal effect were confirmed in future studies, this would have potential implications for the prevention and treatment of hypertension.

Acknowledgments

The Lifelines Biobank initiative has been made possible by subsidy from the Dutch Ministry of Health, Welfare and Sport, the Dutch Ministry of Economic Affairs, the University Medical Center Groningen (UMCG the Netherlands), University Groningen and the Northern Provinces of the Netherlands. We are grateful for the services of the Lifelines Cohort Study and Biobank, the research support staffs, and all the study participants. We would also like to thank Dr Arthur Gilmour for his help in running ASReml software.



