webnovel

Decay Fare

X_Novel · Romance
Not enough ratings
5 Chs

Decay Fare

You are in the world.

Such as you, running. Radioactive decay by emitting an alpha particle in the world, because of you.

The emission of a nucleus of a helium atom from the nucleus of an element, generally of a heavy element, in the process of its radioactive decay

The radioactive decay of an atomic nucleus by emission of an alpha particle. Type of radioactive disintegration (see radioactivity) in which some unstable atomic nuclei dissipate excess energy by spontaneously ejecting an alpha particle. Alpha particles have two positive charges and a mass of four atomic mass units; they are identical to helium nuclei. Though they are emitted at speeds about one-tenth that of light, they are not very penetrating and have ranges in air of about 1-4 in. (2.5-10 cm). Alpha decay commonly occurs in elements with atomic numbers greater than 83 (bismuth), but can occur in some rare-earth elements in the atomic-number range of 60 (neodymium) to 71 (lutetium). Alpha decay half-lives range from about a microsecond (10^-6 second) to billions of years (10^17 seconds)

The process of radioactive decay in which the nucleus of an atom emits an alpha particle. The new atom's atomic number is lower by two and its atomic mass number is reduced by four

radioactive decay of an atomic nucleus that is accompanied by the emission of an alpha particle

the loss of an alpha particle during radioactive decay

I am alpha

The emission of an alpha particle by the nucleus. The atomic number of the parent nucleus is decreased by two and the atomic weight is decreased by four

The radioactive decay of a nucleus via emission of an alpha particle
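The definitions above all state the same bookkeeping rule, which can be written as a minimal sketch (the uranium example is standard textbook material, not taken from the entries above):

```python
# Bookkeeping of alpha decay: the emitted alpha particle carries away
# two protons and two neutrons, so Z drops by 2 and A drops by 4.

def alpha_decay(Z, A):
    """Return the (Z, A) of the daughter nucleus after one alpha emission."""
    return Z - 2, A - 4

# Standard textbook example: uranium-238 decays to thorium-234.
print(alpha_decay(92, 238))  # (90, 234)
```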

The particle

For example: The Large Hadron Collider is the world's largest particle accelerator.

A body with very small size; a fragment

An elementary particle or subatomic particle

A word that has a particular grammatical function but does not obviously belong to any particular part of speech, such as the word to in English infinitives or O as the vocative particle

{n} a small part or word, an atom

A subordinate word that is never inflected (a preposition, conjunction, interjection); or a word that can not be used except in compositions; as, ward in backward, ly in lovely

A crumb or little piece of consecrated host

{i} tiny portion, very small fragment; grain; preposition; conjunction

In physics, a particle is a piece of matter smaller than an atom, for example an electron or a proton

In grammar, a particle is a preposition such as `into' or an adverb such as `out' which can combine with a verb to form a phrasal verb. See also: Higgs particle, particle accelerator, particle physics, subatomic particle, elementary particle, W particle, wave-particle duality, Z particle

The smaller hosts distributed in the communion of the laity

Any very small portion or part; the smallest portion; as, he has not a particle of patriotism or virtue

a function word that can be used in English to form phrasal verbs; a body having finite mass and internal structure but negligible dimensions

A particle of something is a very small piece or amount of it. There is a particle of truth in his statement. food particles

A minute part or portion of matter; a morsel; a little bit; an atom; a jot; as, a particle of sand, of wood, of dust

Related Terms

particle accelerators: plural form of particle accelerator

particle beam: A beam of atoms, ions or subatomic particles that has been accelerated by some device and collimated by magnets and/or electrostatic lenses

particle board: a structural material manufactured from wood particles (such as chips and shavings) by pressing, and binding through resin

particle boards: plural form of particle board

particle energies: plural form of particle energy

particle energy: The sum of a particle's potential energy, kinetic energy and rest energy

particle mechanics: The study of the motion of individual particles

particle physics: A branch of physics that studies the elementary constituents of matter and radiation, and the interactions between them

particle theory: atomic theory

Particle image velocimetry: Particle image velocimetry (PIV) is an optical method used to measure velocities and related properties in fluids. The fluid is seeded with particles which, for the purposes of PIV, are generally assumed to faithfully follow the flow dynamics. It is the motion of these seeding particles that is used to calculate velocity information

particle board: Material made in rigid sheets from compressed wood chips and resin and used for furniture and in buildings

particle swarm optimization: (Computing) Particle swarm optimization (PSO) is a stochastic, population-based computer problem-solving algorithm; it is a kind of swarm intelligence that is based on social-psychological principles and provides insights into social behavior, as well as contributing to engineering applications

particle accelerator: A particle accelerator is a machine used for research in nuclear physics which can make particles that are smaller than atoms move very fast. A device, such as a cyclotron or linear accelerator, that accelerates charged subatomic particles or nuclei to high energies. Also called atom smasher. a machine used in scientific studies which makes the very small pieces of matter that atoms are made of move at high speeds. Device that accelerates a beam of fast-moving, electrically charged atoms (ions) or subatomic particles. Accelerators are used to study the structure of atomic nuclei (see atom) and the nature of subatomic particles and their fundamental interactions. At speeds close to that of light, particles collide with and disrupt atomic nuclei and subatomic particles, allowing physicists to study nuclear components and to make new kinds of subatomic particles. The cyclotron accelerates positively charged particles, while the betatron accelerates negatively charged electrons. Synchrotrons and linear accelerators are used either with positively charged particles or electrons. Accelerators are also used for radioisotope production, cancer therapy, biological sterilization, and one form of radiocarbon dating

particle accelerator: machine that increases the speed of charged particles to a high rate using electrical energy fields

particle beam: A beam of atoms or subatomic particles that have been accelerated by a particle accelerating device, aimed by magnets, and focused by a lens
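The particle swarm optimization entry above describes the algorithm only in outline. A minimal, self-contained sketch of the velocity-and-position update it refers to might look like the following; all parameter values (inertia `w`, pull coefficients `c1`/`c2`, swarm size, bounds) are illustrative defaults, not taken from the entry:

```python
import random

random.seed(42)  # deterministic run for the demo

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f over R^dim with a basic global-best particle swarm."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
print(best_val)
```

The "social-psychological" phrasing in the entry maps directly onto the two pull terms: one toward each particle's own best experience, one toward the swarm's.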

- A positron is a small particle similar to an electron, but with a positive electric charge.

Z particle

Nuclear Instruments and Methods

Volume 31, Issue 1, 1 December 1964, Pages 1-12

A new particle identifier technique for Z = 1 and Z = 2 particles in the energy range > 10 MeV

https://doi.org/10.1016/0029-554X(64)90313-1

Abstract

Protons, deuterons, tritons, helium-3 and α particles produced in nuclear reactions have previously been identified by use of ΔE and E counters to determine dE/dx and E. Multiplying these together produces an output that is dependent on the type of particle. This technique is based on the theoretical relationship between dE/dx, E and the mass and charge of the particle. Unfortunately there is an obvious restriction on the technique, since dE/dx changes as the particle passes through the ΔE counter. For the E(dE/dx) product to have any real meaning, the ΔE counter must be thin and absorb only a small part of the total energy. This limits the use of this technique in a given experiment to small energy ranges and to selected types of particle.

The new identifier also uses a ΔE (thickness T) and E counter, but employs the empirical relationship Range = aE^1.73, where a depends on the type of particle. Using this relationship, one can show T/a = (E+ΔE)^1.73 − E^1.73. The identifier employs logarithmic elements to calculate this quantity and to produce an output that has a fixed value for each type of particle. The thickness of the ΔE counter is not limited to a very small value, and the identifier can cope with mixtures of all five types of particle, each covering a fairly wide energy range. Experimental results using this identifier and lithium-drifted silicon detectors are presented and illustrate the clear separation between He3 and He4 particles (difficult to achieve with the multiplier type of identifier).
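The identity the abstract relies on, T/a = (E + ΔE)^1.73 − E^1.73, can be checked numerically. In this sketch only the exponent 1.73 comes from the abstract; the range constant `a` and counter thickness `T` are made-up illustrative numbers, not measured detector values:

```python
# The exponent 1.73 is the empirical range-energy exponent quoted in the
# abstract; `a` and `T` below are illustrative, not measured values.

P = 1.73

def exit_energy(E0, a, T):
    """Energy remaining after a particle of energy E0 crosses the dE counter.

    Follows from Range(E0) - Range(E) = T with Range = a * E**P.
    """
    return (E0 ** P - T / a) ** (1 / P)

def identifier(E, dE):
    """The quantity (E + dE)**P - E**P, which equals T/a at every energy."""
    return (E + dE) ** P - E ** P

a, T = 0.02, 0.5                 # illustrative: T/a = 25
for E0 in (20.0, 40.0, 80.0):    # MeV, within the paper's E > 10 MeV regime
    E = exit_energy(E0, a, T)    # energy measured in the E counter
    dE = E0 - E                  # energy deposited in the dE counter
    print(f"E0={E0:5.1f}  identifier={identifier(E, dE):.3f}  T/a={T / a:.3f}")
```

The printed identifier value is the same at all three incident energies, which is exactly why the output separates particle types over a wide energy range.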

You don't know it.

A specific code, composed of alpha characters, numeric characters, or a combination thereof, that identifies each published airline tariff fare

fare basis: The specific fare for a ticket at a designated level of service; specified by one or more letters or by a combination of letters and numbers Example: The letter "Y" designates coach service on an airline

fare basis: Determines how many miles or points are earned based on the fare paid For example, to accumulate mileage on some airlines, you must pay a published full-fare rate; some programs do not award miles for highly discounted fares Some hotels require that you pay corporate rates or higher to accrue points

fare basis: A term for the price category in which a passenger's ticket is charged

fare increase: increase in the sum charged for riding in a public conveyance

fare stage: {i} (British) section of bus route used for calculating the fare; segment of a bus route for which the fare is the same

fare-stage: a section along the route of a bus for which the fare is the same

fare-thee-well: state of perfection; the utmost degree; "they polished the furniture to a fare-thee-well"

bachelor's fare: Bread and cheese and kisses

Human Genome Project

Human Genome Project: German Perspective

J. Maurer, H. Lehrach, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1 The Human Genome Project

The Human Genome Project differs from any previous biological or medical project in size and cost. Its ambitious goal is the deciphering (sequencing) of all 3 billion building blocks of our genetic make-up—the so-called DNA—by 2005, the identification of all genes encoded in that DNA, and the understanding of the role of these genes in health and disease. The knowledge of these genes and their function is crucial for basic biological research as well as for the improvement of prevention, diagnostics, and therapy of disease. It is the prerequisite for a targeted design of pharmaceuticals and for novel approaches like gene therapy. This knowledge offers hope to millions of affected people and also holds immense economic potential.

The Human Genome Project was started in the USA in 1990, with James Watson, the co-discoverer of the DNA structure, as its first co-ordinator. It was clear from the beginning, given the estimated cost of US $3 billion and the immense amount of work involved, that the Human Genome Project had to include many countries. The Human Genome Organization (HUGO), an independent international organization of genome scientists, was established to co-ordinate the duties. The US Human Genome Project started off with considerable public funding (US $87 million in 1990) and was soon followed by the UK and France. Between 1991 and 1996 France contributed a comprehensive genetic map of the human genome. It is noteworthy that this work was predominantly financed by private money from a patients' association. The British Wellcome Trust for its part set up the world's largest sequencing facility, the Sanger Centre. Smaller initiatives later emerged, for instance in Japan and Canada. No such activity, however, was seen in Germany until 1995. By 2001 a nearly complete 'working draft' of the human genome had been presented by the publicly funded Human Genome Project, including a significant German contribution. The complete sequence will be available in public databases some time ahead of schedule, probably by 2003. In 1998 emerging competition from private companies, notably Craig Venter's Celera, sped up the deciphering of the human genetic code tremendously. But the knowledge of our comprehensive genetic make-up is not considered a benefit for mankind by everybody. Profound ethical issues are raised by the possibilities inherent in this knowledge. This had been realized by the founders of the project, and therefore about 3 percent of the budget was dedicated to exploring the ethical, legal, and social implications of the Human Genome Project.

In most countries participating in the Human Genome Project, an extensive discussion took place about the opportunities and risks attached to these novel technologies. In Germany a parliamentary committee was established to elucidate the topic. But the discussion was much more polarized in Germany than in other countries.

Human Genome Project: Japanese Perspective

T. Gojobori, in International Encyclopedia of the Social & Behavioral Sciences, 2001

The Japanese human genome project began in 1988 as a response to the progress in the corresponding activities in the United States and Europe. At the outset the human genome project in Japan was confronted by criticism from various sources. There was a strong belief among certain groups that this kind of project was not true research but rather mere routine work. Moreover, difficulties were experienced in relation to the scarcity of technicians skillful in this area who could support the genome researchers and also in the reluctance of government and private institutions to invest large grants in a single project. Japan overcame these difficulties by redefining the human genome project in the following way. In Japan, the aim of the human genome project was redirected not only to sequence the genome but also to include functional analysis of genes and elucidation of tertiary protein structures. Although this redefinition of the human genome project has successfully eased various criticisms, the main focus of the project has been diffused to a considerable extent, leading to ambiguity in the Japanese contribution to the international effort of human genome sequencing. A successful outlook for the Japanese human genome project will be in placing more emphasis on the promotion of functional genomics, comparative genomics, analysis of genomic diversity, and the development of DNA chips or microarrays.

Measurement

Jules J. Berman Ph.D., M.D., in Principles of Big Data, 2013

Gene Counting

The Human Genome Project is a massive bioinformatics project in which multiple laboratories helped to sequence the 3 billion base pair haploid human genome (see Glossary item, Human Genome Project). The project began its work in 1990, a draft human genome was prepared in 2000, and a completed genome was finished in 2003, marking the start of the so-called postgenomics era. There are about 2 million species of proteins synthesized by human cells. If every protein had its own private gene containing its specific genetic code, then there would be about 2 million protein-coding genes contained in the human genome. As it turns out, this estimate is completely erroneous. Analysis of the human genome indicates that there are somewhere between 20,000 and 150,000 protein-coding genes. The majority of estimates come in at the low end (about 25,000 genes). Why are the current estimates so much lower than the number of proteins and why is there such a large variation in the lower and upper estimates (20,000 to 150,000)?

Counting is difficult when you do not fully understand the object that you are counting. The reason that you are counting objects is to learn more about the object, but you cannot always count an object accurately until you have already learned a great deal about the object. Perceived this way, counting is a bootstrapping problem. In the case of proteins, a small number of genes can account for a much larger number of protein species, because proteins can be assembled from combinations of genes, and the final form of a unique protein can be modified by so-called post-translational events (folding variations, chemical modifications, sequence shortening, clustering by fragments, etc.). The methods used to count protein-coding genes can vary [80]. One technique might look for sequences that mark the beginning and the end of a coding sequence, whereas another method might look for segments containing base triplets that correspond to amino acid codons. The former method might count genes that code for cellular components other than proteins, and the latter might miss fragments whose triplet sequences do not match known protein sequences [81]. Improved counting methods are being developed to replace the older methods, but a final number evades our grasp.
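The start/stop-marker counting technique mentioned above can be illustrated with a deliberately naive open-reading-frame scan. The sequence and the minimum-length cutoff below are invented for illustration; real gene finders are far more sophisticated, which is exactly the chapter's point about why the counts disagree:

```python
# A deliberately naive gene-counting sketch: call anything from an ATG start
# codon to the next in-frame stop codon an open reading frame (ORF).

STOPS = {"TAA", "TAG", "TGA"}

def count_orfs(dna, min_codons=2):
    """Count in-frame ATG...stop stretches in the three forward reading frames."""
    n = 0
    for frame in range(3):
        codons = [dna[i:i + 3] for i in range(frame, len(dna) - 2, 3)]
        start = None
        for j, codon in enumerate(codons):
            if start is None and codon == "ATG":
                start = j
            elif start is not None and codon in STOPS:
                if j - start >= min_codons:
                    n += 1
                start = None
    return n

print(count_orfs("ATGAAATTTTAAGGGATGCCCTGA"))  # two ORFs, both in frame 0
```

Changing `min_codons`, adding the reverse strand, or requiring a match to known protein sequences all change the count, which is the bootstrapping problem in miniature.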

The take-home lesson is that the most sophisticated and successful Big Data projects can be stumped by the simple act of counting.

Alcoholism: Genetic Aspects

K.E. Browman, J.C. Crabbe, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2.2 Quantitative Trait Loci (QTL) mapping strategies

The Human Genome Project has also led to genome mapping and DNA sequencing in a variety of other organisms including the laboratory mouse. Late twentieth-century developments in the physical mapping of the mouse make positional cloning of genes involved in various behaviors more likely. However, most behaviors (including responses to alcohol) are influenced by multiple genes. Behaviors, or complex traits, influenced by a number of genes are often termed quantitative traits. Within a population, a quantitative trait is not all-or-none, but differs in the degree to which individuals possess it. A section of DNA thought to harbor a gene that contributes to a quantitative trait is termed a quantitative trait locus (QTL). QTL mapping identifies the regions of the genome that contain genes affecting the quantitative trait, such as an alcohol response. Once a QTL has been located, the gene can eventually be isolated and its function studied in more detail. Thus, QTL analysis provides a means of locating and measuring the effects of a single gene on alcohol sensitivity.

In tests of sensitivity to convulsions following alcohol withdrawal, QTLs have been found on mouse chromosomes 1, 2, and 11. The QTL on chromosome 11 is near a cluster of GABAA receptor subunit genes. A number of subunits are needed to make a GABAA receptor, and the ability of a drug to act on the receptor seems to be subunit dependent. A polymorphism in the protein-coding sequence for Gabrg2 (coding for the γ2 subunit of the GABAA receptor) has been identified. This polymorphism is genetically correlated with duration of loss of righting reflex and a measure of motor incoordination following alcohol administration.

The use of QTL analysis has allowed us to begin the process of identifying the specific genes involved in alcohol related traits. Because each QTL initially includes dozens of genes, not all of which have yet been identified, it will require much more work before each QTL can be reduced to a single responsible gene. For the time being, one important aspect of QTL mapping in mice is that identification of a QTL in mice points directly to a specific location on a human chromosome in about 80 percent of cases. Thus, the animal mapping work can be directly linked to the human work in studies such as the COGA described in Sect. 1.1, which is in essence a human QTL mapping project. By using transgenic animal models (mice in which there has been a deliberate modification of the genome), such as null mutants, QTLs can be further investigated.

Y-chromosomes and Evolution

A. Ruiz-Linares, in International Encyclopedia of the Social & Behavioral Sciences, 2001

6 Conclusion

Developments resulting from the Human Genome Project have recently catapulted the use of Y-chromosome markers into the forefront of the study of human population origins and diversification. The large number of markers currently available together with novel highly efficient technologies enable analyses of an unprecedented resolution and scale. Evolutionary analyses are facilitated by the fact that slowly evolving markers allow the unambiguous assessment of the evolutionary relationship between Y-chromosomes. Rapidly evolving markers can refine analyses within specific lineages or populations. These studies illuminate not only questions related to the origin of our species and its early diversification but also allow the probing of more recent demographic events. The synthesis of genetic data with information obtained from sources including geology, paleoanthropology, archaeology, and historical demography is allowing a refined reconstruction of human evolution stretching from our origins as a species all the way to the exploration of quite recent historical events.

Cytomics: From Cell States to Predictive Medicine

G. Valet, ... A. Kriete, in Computational Systems Biology, 2006

A Single-cell image analysis

One of the most important outcomes of the Human Genome Project is the realization that there is considerably more biocomplexity in the genome and the proteome than previously appreciated (Herbert 2004). Not only are there many splice variants of each gene system, but some proteins can function in entirely different ways (in different cells and in different locations of the same cell), lending additional importance to the single-cell analysis of laser scanning cytometry and confocal microscopy. These differences would be lost in the mass spectroscopy of heterogeneous cell populations. Hence, cytomics approaches may be critical to the understanding of cellular and tissue functions.

Fluorescence microscopy represents a powerful technology for stoichiometric single-cell-based analysis in smears or tissue sections. Whereas in the past the major goal of microscopy and imaging was to produce high-quality images of cells, in recent years an increasing demand for quantitative and reproducible microscopic analysis has arisen. This demand came largely from the drug discovery companies, but also from clinical laboratories. Slide-based cytometry is an appropriate approach for fulfilling this demand (Tarnok and Gerstner 2002). Laser scanning cytometry (Gerstner et al. 2002; Tarnok and Gerstner 2002; Megason et al. 2003) was the first of this type of instrument to become commercially available, but today several different instruments are on the market (Jager et al. 2003; Molnar et al. 2003; Schilb et al. 2004).

These types of instruments are built around scanning fluorescence microscopes that are equipped with either a laser (Tarnok and Gerstner 2002; Schilb et al. 2004) or a mercury arc lamp as the light source (Bajaj et al. 2000; Molnar et al. 2003). The generated images are processed by appropriate software algorithms to produce data similar to flow cytometry. Slide-based cytometry systems are intended to be high-throughput instruments, although at present they have a lower throughput than flow cytometers. These instruments allow multicolor measurements of high complexity (Gerstner et al. 2002; Ecker and Steiner 2004) comparable to or exceeding that of flow cytometers.

A substantial advantage over flow cytometry is that cells in adherent cell cultures and tissues can be analyzed without prior disintegration (Smolle et al. 2002; Kriete et al. 2003; Ecker et al. 2004; Gerstner et al. 2004). In addition, due to the fixed position of the cells on the slide or in the culture chamber cells can be relocated several times and reanalyzed. Even restaining and subsequent reanalysis of each individual cell is feasible. Because a high information density on the morphological and molecular pattern of single cells can be acquired by slide-based cytometry, it is an ideal technology for cytomics.

Although at present not realized, the information density per cell can be increased further by implementing technologies such as spectral imaging (Ecker et al. 2004), confocal cytometry (Pawley 1995), fluorescence resonance energy transfer (FRET) (Jares-Erijman and Jovin 2003; Ecker et al. 2004; Peter and Ameer-Beg 2004), near-infrared Raman spectroscopy (Crow et al. 2004), fluorescence lifetime imaging (FLIM) (Murata et al. 2000; Peter and Ameer-Beg 2004), optical coherence tomography (Boppart et al. 1998), spectroscopic optical coherence tomography (Xu et al. 2004), and second harmonic imaging (Campagnola et al. 2003). All of these technologies mark the progress in optical bio-imaging.

In the future, developments in imaging resulting from a family of concepts that allows image acquisition far beyond the resolution limit (down to the nm range) are expected. These include multiphoton excitation (Manconi et al. 2003), ultrasensitive fluorescence microscopes (Hesse et al. 2004), stimulated emission depletion (STED) microscopy (Hell 2003), spectral distance microscopy (Esa et al. 2000), atomic force microscopy (AFM) and scanning near-field optical microscopy (SNOM) (Rieti et al. 2004), and image restoration techniques (Holmes and Liu 1992). Using laser ablation in combination with imaging, even thick tissue specimens can be analyzed on a cell-by-cell basis (Tsai et al. 2003).

Ethical Dilemmas: Research and Treatment Priorities

M. Betzler, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2.2 Ethical Dilemmas in Biotechnology

The far-reaching potential advances in gene therapy and genetic engineering for humans (The Human Genome Project), and the implications for humans of cloning, have given rise to ethical dilemmas scientists have to face. The following examples can illustrate how progress in genetic engineering generates dilemmas between conflicting obligations and/or conflicts due to risk assessment (Chadwick 1992).

The identification of human genes, for example, can lead to the following conflict: on the one hand genetic knowledge enhances therapies for hereditary disease, on the other hand, it poses problems about the potentially exploitative use of resources and genetic information. The attempt to sequence the entire human genome raises ethical questions whether the risks of exploitation outweigh the benefits of knowledge.

Genetic alterations passed on to future generations through so-called germline therapy raise further problems regarding consent while, at the same time, 'improving' human genetic potential for future generations. Do we have an obligation to present generations to relieve suffering by seeking treatments for genetic disease or do future generations have a right to an unmodified genetic inheritance? Equally, the advantages of genetic screening can be outweighed by the costs of stigmatization on the basis of a person's genetic make-up. Such individuals might find themselves unemployable or uninsurable. There also arises the question whether the very existence of genetic screening can exert pressure on individuals with regard to their reproductive decisions, thus impairing their autonomy. How can genetic public health be fostered without practicing eugenics? Arguments about the moral urgency of relieving suffering also conflict with arguments about human dignity that discredit the production of 'designer babies' (Annas and Elias 1992).

The incorporation of foreign genes into the genome of an organism is commonly discussed in connection with animals and plants. One dilemmatic issue concerns the interests of the host organism (particularly in the case of animals), the consequences for human health and for other species, and the risks of releasing genetically engineered organisms into the environment.

A related issue has been the matter of justification of treatment. Animal experimentation poses the question whether animals have moral status to the detriment of life-enhancing research results for humans. As subjects of genetic engineering, for example, farm animals have suffered from unintended deleterious effects, while research animals have suffered the consequences of being intentionally bred for propensity to develop debilitating diseases. A further important issue is related to the question whether an agent who had moral status can cease to have that status. The human cases of the brain dead, anencephalic infants, and those in a permanently vegetative state are cases in point. If so, then the case of xenotransplantation or the transplanting of animal organs into humans is affected. The interlocking questions of moral standing, justification of treatment, and loss of moral considerability can thus cause conflicts as to which consideration should be given more weight. Ethical dilemmas in research are thus a challenge to those within and those outside research, to debate whether research practices and their effects are right and just. For further treatment see Animal Rights in Research and Research Application; Bioethics: Examples from the Life Sciences; Euthanasia; Genetic Counseling: Historical, Ethical, and Practical Aspects; Reproductive Medicine: Ethical Aspects; Research Subjects, Informed and Implied Consent of.

Introduction to Human Genome Computing Via the World Wide Web

Lincoln D. Stein, in Guide to Human Genome Computing (Second Edition), 1998

3.5 GDB

GDB, the Genome Database, is the main repository for all published mapping information generated by the Human Genome Project. It is a species-specific database: only Homo sapiens maps are represented. Among the information stored in GDB is:

genetic maps

physical maps (clone, STS and fluorescent in situ hybridization (FISH) based)

cytogenetic maps

physical mapping reagents (clones, STSs)

polymorphism information

citations

To access GDB, connect to its home page (Figure 1.15). GDB offers several different ways to search the maps:

Figure 1.15. The GDB home page provides access to the main repository for human genome mapping information.

A simple search. This search, accessible from GDB's home page, allows you to perform an unstructured search of the database by keyword or the ID of the record. For example, a keyword search for 'insulin' retrieves a list of clones and STSs that have something to do either with the insulin gene or with diabetes mellitus.

Structured searches. A variety of structured searches available via the link labeled 'Other Search Options' allow you to search the database in a more deliberate manner. You may search for maps containing a particular region of interest (defined cytogenetically, by chromosome, or by proximity to a known marker) or for individual map markers based on a particular attribute (e.g. map position and marker type). GDB also offers a 'Find a gene' interface that searches through the various aliases to find the gene that you are searching for.

Searches that recover individual map markers and clones will display them in a list of hypertext links similar to those displayed by Entrez and PDB. When you select an entry you will be shown a page similar to Figure 1.16. Links on the page lead to citation information, information on maps this reagent has been assigned to, and cross-references to the GenBank sequence for the marker or clone. GDB holds no primary sequence information, but the Web's ability to interconnect databases makes this almost unnoticeable.

Figure 1.16. GDB displays most entries using a text format like that shown here.

A more interesting interface appears when a search recovers a map. In this case, GDB launches a Java applet to display it. If multiple maps are retrieved by the search, the maps are aligned and displayed side by side (Figure 1.17). A variety of settings allows you to adjust the appearance of the map, as well as to turn certain maps on and off. Double clicking on any map element will display its GDB entry in a separate window.

Figure 1.17. GDB maps are displayed using an interactive Java applet.

Human Evolutionary Genetics

J.L. Mountain, in International Encyclopedia of the Social & Behavioral Sciences, 2001

7.1 Human Genome Project

The rapid development of human evolutionary genetics over the final decade of the twentieth century owes a great deal to the Human Genome Project. Much of the technology developed under the auspices of that project (e.g., improved speed and accuracy of DNA sequencing methods) is currently applied in the field. In 1991, a group of human evolutionary geneticists led by L. Cavalli-Sforza proposed that the Human Genome Project be extended to include consideration of variation among individuals. Almost immediately, the proposal encountered criticism from groups concerned that geneticists might exploit individuals contributing DNA samples. Nonetheless, a subsequent Human Genome Project plan did include consideration of variation across individuals as a major goal. New methods of detecting variation among individuals were developed. The result is hundreds of thousands of nucleotide sites known to be polymorphic in humans across the entire human genome. Very few, however, have been studied in a broad set of human populations.

Familial Studies: Genetic Inferences

A. Vetta, C. Capron, in International Encyclopedia of the Social & Behavioral Sciences, 2001

11 The Future?

McGuffin et al. (2001) envisage a new science of behavior genomics. We suspect that as the unscientific nature of behavior genetic analysis becomes known, researchers will eschew heritability analysis. HGP has made the identification of a genetic disorder easier. If, for example, a large number of individuals suffering from a disorder have mutation at a locus as compared with the normal type, this provides some evidence of the genetic nature of the disorder. Heritability analysis is useless as it relates to a population and not an individual. To find remedies for genetic disorders, type I models are useful. Venter (2001, The Independent, February 12) succinctly summarizes our view when he says, HGP indicates 'to me that we are not hard wired. The idea that there is a simple deterministic explanation—that is: we are the sum total of our genes—makes me as a scientist, want to laugh and cry.'