
Decision-Making in Clinical Medicine

CHAPTER 4

In general, the use of concurrent controls is vastly preferable to that

of historical controls. For example, comparison of current surgical

management of left main CAD with medically treated patients with left

main CAD during the 1970s (the last time these patients were routinely

treated with medicine alone) would be extremely misleading because

“medical therapy” has substantially improved in the interim.

Randomized controlled clinical trials include the careful prospective

design features of the best observational data studies but also include

the use of random allocation of treatment. This design provides the

best protection against measured and unmeasured confounding due to

treatment selection bias (a major aspect of internal validity). However,

the randomized trial may not have good external validity (generalizability) if the process of recruitment into the trial resulted in the exclusion of many potentially eligible subjects or if the nominal eligibility for

the trial describes a very heterogeneous population.

Consumers of medical evidence need to be aware that randomized

trials vary widely in their quality and applicability to practice. The

process of designing such a trial often involves many compromises.

For example, trials designed to gain U.S. Food and Drug Administration (FDA) approval for an investigational drug or device must fulfill

regulatory requirements (such as the use of a placebo control) that may

result in a trial population and design that differ substantially from

what practicing clinicians would find most useful.

■ META-ANALYSIS

The Greek prefix meta signifies something at a later or higher stage of

development. Meta-analysis is research that combines and summarizes

the available evidence quantitatively. Although it is used to examine

nonrandomized studies, meta-analysis is most useful for summarizing

all available randomized trials examining a particular therapy used in

a specific clinical context. Ideally, unpublished trials should be identified and included to avoid publication bias (i.e., missing “negative”

trials that may not be published). Furthermore, the best meta-analyses

obtain and analyze individual patient-level data from all trials rather

than using only the summary data from published reports. Nonetheless, not all published meta-analyses yield reliable evidence for a particular problem, so their methodology should be scrutinized carefully

to ensure proper study design and analysis. The results of a well-done

meta-analysis are likely to be most persuasive if they include at least

several large-scale, properly performed randomized trials. Meta-analysis can especially help detect benefits when individual trials are

inadequately powered (e.g., the benefits of streptokinase thrombolytic

therapy in acute MI demonstrated by ISIS-2 in 1988 were evident by

the early 1970s through meta-analysis). However, in cases in which the

available trials are small or poorly done, meta-analysis should not be

viewed as a remedy for deficiencies in primary trial data or trial design.
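The quantitative pooling described above can be sketched with a minimal fixed-effect (inverse-variance) meta-analysis of odds ratios. The trial counts below are hypothetical, and a real meta-analysis would also assess heterogeneity, small-study effects, and publication bias:

```python
import math

def pooled_odds_ratio(trials):
    """Fixed-effect (inverse-variance) pooling of log odds ratios.

    Each trial is (events_treat, n_treat, events_ctrl, n_ctrl).
    Returns the pooled odds ratio and its 95% confidence interval.
    """
    weighted_sum, weight_total = 0.0, 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c                 # non-events in each arm
        log_or = math.log((a * d) / (b * c))  # log odds ratio for this trial
        var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf's variance estimate
        w = 1 / var                           # inverse-variance weight
        weighted_sum += w * log_or
        weight_total += w
    pooled = weighted_sum / weight_total
    se = math.sqrt(1 / weight_total)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Three hypothetical underpowered trials:
# (deaths_treat, n_treat, deaths_ctrl, n_ctrl)
trials = [(12, 150, 20, 150), (8, 100, 14, 100), (30, 400, 45, 400)]
or_, lo, hi = pooled_odds_ratio(trials)
```

Each small trial here points in the same direction without being individually decisive; pooling tightens the confidence interval around the common effect, which is the sense in which meta-analysis can reveal a benefit that no single trial was powered to show.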

Meta-analyses typically focus on summary measures of relative

treatment benefit, such as odds ratios or relative risks. Clinicians should

also examine what absolute risk reduction (ARR) can be expected from

the therapy. A metric of absolute treatment benefit that is frequently

reported is the number needed to treat (NNT) to prevent one adverse

outcome event (e.g., death, stroke). NNT should not be interpreted

literally as a causal statement. NNT is simply 1/ARR. For example, if

a hypothetical therapy reduced mortality rates over a 5-year follow-up

by 33% (the relative treatment benefit) from 12% (control arm) to 8%

(treatment arm), the ARR would be 12% – 8% = 4% and the NNT would

be 1/.04, or 25. This does not mean literally that 1 patient benefits and

24 do not. However, it can be conceptualized as an informal measure

of treatment efficiency. If the hypothetical treatment was applied to a

lower-risk population, say, with a 6% 5-year mortality, the 33% relative

treatment benefit would reduce absolute mortality by 2% (from 6%

to 4%), and the NNT for the same therapy in this lower-risk group of

patients would be 50. Although not always made explicit, comparisons

of NNT estimates from different studies should account for the duration

of follow-up used to create each estimate. In addition, the NNT concept assumes a homogeneity in response to treatment that may not be

accurate. The NNT is simply another way of summarizing the absolute

treatment difference and does not provide any unique information.
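The ARR and NNT arithmetic in this paragraph can be written out directly. This sketch simply reproduces the chapter's worked numbers; the function names are ours, not a standard library:

```python
def absolute_risk_reduction(control_risk, treatment_risk):
    """ARR: absolute difference in event rates between arms."""
    return control_risk - treatment_risk

def number_needed_to_treat(control_risk, treatment_risk):
    """NNT = 1/ARR, tied to the trial's follow-up duration."""
    return 1 / absolute_risk_reduction(control_risk, treatment_risk)

# The chapter's worked example: 12% vs 8% 5-year mortality
arr = absolute_risk_reduction(0.12, 0.08)     # 4% absolute reduction
nnt = number_needed_to_treat(0.12, 0.08)      # about 25

# Same 33% relative benefit in a lower-risk population: 6% vs 4%
nnt_low = number_needed_to_treat(0.06, 0.04)  # about 50
```

Note how the same relative benefit doubles the NNT when baseline risk halves, which is why NNT comparisons across studies must account for baseline risk and follow-up duration.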

■ CLINICAL PRACTICE GUIDELINES

Per the 1990 Institute of Medicine definition, clinical practice guidelines are “systematically developed statements to assist practitioner

and patient decisions about appropriate health care for specific clinical

circumstances.” This definition emphasizes several crucial features of

modern guideline development. First, guidelines are created by using

the tools of EBM. In particular, the core of the development process is

a systematic literature search followed by a review of the relevant peer-reviewed literature. Second, guidelines usually are focused on a clinical

disorder (e.g., diabetes mellitus, stable angina pectoris) or a health care

intervention (e.g., cancer screening). Third, the primary objective of

guidelines is to improve the quality of medical care by identifying care

practices that should be routinely implemented, based on high-quality

evidence and high benefit-to-harm ratios for the interventions. Guidelines are intended to “assist” decision-making, not to define explicitly

what decisions should be made in a particular situation, in part because

guideline-level evidence alone is never sufficient for clinical decision-making (e.g., deciding whether to intubate and administer antibiotics

for pneumonia in a terminally ill individual, in an individual with

dementia, or in an otherwise healthy 30-year-old mother).

Guidelines are narrative documents constructed by expert panels

whose composition often is determined by interested professional

organizations. These panels vary in expertise and in the degree to

which they represent all relevant stakeholders. The guideline documents consist of a series of specific management recommendations, a

summary indication of the quantity and quality of evidence supporting

each recommendation, an assessment of the benefit-to-harm ratio for

the recommendation, and a narrative discussion of the recommendations. Many recommendations simply reflect the expert consensus of

the guideline panel because literature-based evidence is insufficient

or absent. A recent examination of this issue in cardiovascular guidelines showed that <15% of guideline recommendations were based

on the highest level of clinical trial evidence, and this proportion had

not improved in 10 years despite a substantial number of trials being

conducted and published. The final step in guideline construction is

peer review, followed by a final revision in response to the critiques

provided.

Guidelines are closely tied to the process of quality improvement in

medicine through their identification of evidence-based best practices.

Such practices can be used as quality indicators. Examples include the

proportion of acute MI patients who receive aspirin upon admission to

a hospital and the proportion of heart failure patients with a depressed

ejection fraction treated with an ACE inhibitor.

CONCLUSIONS

Thirty years after the introduction of the EBM movement, it is tempting to think that all the difficult decisions practitioners face have been

or soon will be solved and digested into practice guidelines and computerized reminders. However, EBM provides practitioners with an

ideal rather than a finished set of tools with which to manage patients.

Moreover, even with such evidence, it is always worth remembering

that the response to therapy of the “average” patient represented by

the summary clinical trial outcomes may not be what can be expected

for the specific patient sitting in front of a provider in the clinic or

hospital. In addition, meta-analyses cannot generate evidence when

there are no adequate randomized trials, and most of what clinicians

confront in practice will never be thoroughly tested in a randomized

trial. For the foreseeable future, excellent clinical reasoning skills and

experience supplemented by well-designed quantitative tools and a

keen appreciation for the role of individual patient preferences in their

health care will continue to be of paramount importance in the practice

of clinical medicine.

■ FURTHER READING

Croskerry P: A universal model of diagnostic reasoning. Acad Med

84:1022, 2009.

Dhaliwal G, Detsky AS: The evolution of the master diagnostician.

JAMA 310:579, 2013.


Fanaroff AC et al: Levels of evidence supporting American College of Cardiology/American Heart Association and European Society of Cardiology Guidelines, 2008-2018. JAMA 321:1069, 2019.

Hunink MGM et al: Decision Making in Health and Medicine: Integrating Evidence and Values, 2nd ed. Cambridge, Cambridge University Press, 2014.

Kahneman D: Thinking, Fast and Slow. New York, Farrar, Straus and Giroux, 2013.

Kassirer JP et al: Learning Clinical Reasoning, 2nd ed. Baltimore, Lippincott Williams & Wilkins, 2009.

Mandelblatt JS et al: Collaborative modeling of the benefits and harms associated with different U.S. breast cancer screening strategies. Ann Intern Med 164:215, 2016.

Monteiro S et al: The 3 faces of clinical reasoning: Epistemological explorations of disparate error reduction strategies. J Eval Clin Pract 24:666, 2018.

Murthy VK et al: An inquiry into the early careers of master clinicians. J Grad Med Educ 10:500, 2018.

Richards JB et al: Teaching clinical reasoning and critical thinking: From cognitive theory to practical application. Chest 158:1617, 2020.

Royce CS et al: Teaching critical thinking: A case for instruction in cognitive biases to reduce diagnostic errors and improve patient safety. Acad Med 94:187, 2019.

Saposnik G et al: Cognitive biases associated with medical decisions: A systematic review. BMC Med Inform Decis Mak 16:138, 2016.

Schuwirth LWT et al: Assessment of clinical reasoning: Three evolutions of thought. Diagnosis (Berl) 7:191, 2020.

PART 1 The Profession of Medicine

CHAPTER 5: Precision Medicine and Clinical Care

The Editors

■ DISEASE NOSOLOGY AND PRECISION MEDICINE

Modern disease nosology arose in the late nineteenth century and represented a clear departure from the holistic, limited descriptions of disease dating to Galen. In this rubric, the definition of any disease is largely based on clinicopathologic observation. Because the correlation of clinical signs and symptoms with pathoanatomy required autopsy material, diseases tended to be characterized by the end organ in which the primary syndrome was manifest and by late-stage presentations. Morgagni institutionalized this framework with the publication of De Sedibus et Causis Morborum per Anatomen Indagatis in 1761, in which he correlated the clinical features of patients with more than 600 autopsies at the University of Padua, demonstrating an anatomic basis for disease pathophysiology. Clinicopathologic observation served as the basis for inductive generalization coupled with the application of Occam's razor, in which disease complexity was reduced to its simplest possible form. While this approach to defining human disease has held sway for over a century and facilitated the conquest of many diseases previously considered incurable, overly inclusive and simplified Oslerian diagnostics suffer from significant shortcomings. These include, but are not limited to, failure to distinguish the underlying etiologies of different diseases with common pathophenotypes. For example, many different diseases can cause end-stage kidney disease or heart failure. Over time, the classification of neurodegenerative disorders and lymphomas, as well as many other diseases, is becoming more refined and precise as the underlying etiologies are identified. These distinctions are important for providing reliable prognostic information for individual patients with even highly prevalent diseases. Additionally, therapies may be ineffective owing to a lack of understanding of the often subtle molecular complexities of specific disease drivers.

Beginning in the mid-twentieth century, the era of molecular medicine offered the idealized possibility of identifying the underlying

molecular basis of every disease. Using a conventional reductionist

paradigm, physician-scientists explored disease mechanism at ever-increasing molecular depth, seeking the single (or limited number of)

molecular cause(s) of many human diseases. Yet, as effective as this

now conventional scientific approach was at uncovering many disease

mechanisms, the clinical manifestations of very few diseases could be

explained on the basis of a single molecular mechanism. Even knowledge of the globin β chain mutation that causes sickle cell disease does

not predict the many different manifestations of the disease (stroke syndrome, painful crises, and hemolytic crisis, among others). Clearly, the

profession had expected too much from oversimplified reductionism

and failed to take into consideration the extraordinary biologic variety

and its accompanying molecular and genetic complexity that underpin both normal and pathologic diversity. The promise of the Human

Genome Project provided new tools and approaches and unleashed

efforts to identify a monogenic, oligogenic, or polygenic cause for every

disease (allowing for environmental modulation). Yet, once again,

disappointment reigned as the pool of genomes expanded without the

expected revelations (aside from rare variants). The arc of progressive

reductionism (as illustrated for tuberculosis in Fig. 5-1) in refining and

explaining disease reached a humbling plateau, revealing the need for

new approaches to understand better the etiology, manifestations, and

progression of most diseases. The stage was set for a return to holism.

However, in contrast to the holism of ancient physicians, we adopted one

that is integrative, taking genomic context into account in all dimensions.

In the course of elaborating this complex pathobiologic landscape,

disease definition must become more precise and progressively more

individualized, setting the stage for what we term precision medicine.

Oversimplification of phenotype is a natural outgrowth of the observational scientific method. Categorizing individuals as falling into

groups or clusters that are reasonably similar simplifies the task of the

diagnostician and also facilitates the application of “specific” therapies

more broadly. Biomedicine has been viewed as less quantitative and

precise than other scientific disciplines, with biologic and pathobiologic diversity (biologic “noise”) viewed as the norm. Thus, distilling

such observational complexity to a fundamental group of symptoms

or signs that are reasonably invariant across a group of sick individuals

has served as the basis for the approach to disease and its treatment

since the earliest days of medicine. This approach to diagnosis and

therapy has remained in place into the twenty-first century, serving

as the basis for the development of standard diagnostic tests and of

broadly applied drug therapies. Targeting larger groups of patients

is efficient when applied to large populations. As successful as this

approach has been in advancing medical care, it is important to point

out its limitations, which include significant predictive inaccuracies

and sizeable segments of the disease population who do not respond to

the most “effective” drugs (upward of 60% by some estimates). Clearly,

a more nuanced approach to diagnosis and therapy is required to

achieve better prognostic and therapeutic outcomes.

Turning first to phenotype, astute clinicians know full well the subtle and vivid differences in presentation that are often manifest among

individuals with the same disease. In some cases, these differences in

pathophenotype lead to new subclassifications of the disease, such as heart

failure with preserved ejection fraction versus heart failure with reduced

ejection fraction. Often, these relatively crude efforts at making diagnoses

more precise are driven by new technologies or new ways of applying

established technologies. In other cases, differences in pathophenotype

are more subtle, not necessarily clinically apparent, and often driven by

measures of endophenotype, such as distinctions among vasculitides facilitated by refinements in serologies or immunophenotyping. The impetus

to create these subclasses of disease is largely determined by the need to

improve prognosis and apply more precise and effective therapies. Based

on these guiding principles, many experienced clinicians will argue—and

rightly so—that they have been practicing personalized, precision medicine throughout their careers: they characterize each patient’s illness in

great detail, and choose therapies that respect and are guided by those individualized clinical and laboratory features, limited though they may be.



For many diseases, genomic variation, whether inherited or acquired,

provides opportunities to refine diagnostic precision with even greater

fidelity and predictive accuracy. For this reason, the field of precision medicine has now entered a new era that couples the molecular

reductionism of the last century with an integrative, systems-level

understanding of the basis for pathophenotype. Equally important,

modern genomics has established that genomic context, sometimes

referred to as modifier genes, is distinctive for each individual person;

hence, understanding that context provides the insight necessary to

predict how a primary disease driver or drivers may manifest a clinical

pathophenotype—e.g., why some individuals with sickle cell anemia

will develop stroke, while others will develop acute chest syndrome.

This concept that primary genetic and/or environmental drivers of a

disease differentially affect disease expression based on an individual’s

unique genomic context serves as the ultimate basis for much of what we

denote as precision medicine.

To develop a precision medicine strategy for any disease, the clinician needs to be aware of two important, confounding principles. First,

patients with different diseases can manifest similar pathophenotypes,

i.e., convergent phenotypes. Examples of this principle include the

hypertrophied myocardium found in hypertrophic cardiomyopathy,

infiltrative cardiomyopathies, critical aortic stenosis, and untreated,

long-standing hypertension; and the thrombotic microangiopathy

found in malignant hypertension, scleroderma renal crisis, thrombotic thrombocytopenic purpura, eclampsia, and antiphospholipid

syndrome. Second, patients with the same basic disease can manifest

very different pathophenotypes, i.e., divergent phenotypes (Chap. 466).

Examples of this principle include the different clinical manifestations

of cystic fibrosis or sickle cell disease and the incomplete penetrance of

many common genetic diseases. These common presentations of different diseases and different presentations of the same disease are both

a consequence of genomic context coupled with unique exposures over

an individual’s lifetime (Fig. 5-2). Understanding the interplay among

these many complex molecular determinants of disease expression is

essential for the success of precision medicine.

Given the complexity of the genomic and environmental context of

an individual, one must ask the question: How precise do we need to

be in order to practice effective precision medicine? Complete knowledge of a person’s comprehensive genome (DNA, gene expression,

mitochondrial function, proteome, metabolome, posttranslational

modification of the proteome, and metagenome, among others) and

quantitative assessments of environmental and social history are not

possible to acquire; yet, this shortcoming does not render the general

problem intractable. Because the molecular networks that govern phenotype are overdetermined (i.e., redundant) and primary drivers of disease expression are modified in a

weighted way by other genomic features of an individual, the practice

of precision medicine can be realized without complete knowledge

of all dimensions of the genome. Examples of how best to realize this

strategy are discussed later in this chapter.

■ REQUIREMENTS FOR PRECISION MEDICINE

The essential elements of any precision medicine effort include phenotyping, endophenotyping (defining the characteristics of a disorder

that are not readily observable), and genomic profiling (Fig. 5-3). While

subtle distinctions among individuals with the same disease are well

known to clinicians, formalizing these nuanced differences is critical

for achieving more precise phenotypes. Deep phenotyping requires a

21st century

– The challenge of reassembly

Late 20th century

– Lesions detected

 at molecular level

– Interferon testing

Late 19th century

– Lesions of cells

 and microbes

– M. tuberculosis

 identification

Early 19th century

– Lesions of organs and tissues

– Caseating granulomata

18th century

– Sick person

– Phthisis

FIGURE 5-1 Arc of reductionism in medicine. (From JA Greene, J Loscalzo. Putting the patient back together–social medicine, network medicine, and the limits of

reductionism. N Engl J Med 377:2493, 2017. Copyright © 2017 Massachusetts Medical Society. Reprinted with permission from Massachusetts Medical Society.)


32PART 1 The Profession of Medicine

– Mutations in >11 sarcomeric proteins

 (>1400 variants)

– Hypertensive heart disease

– Aortic stenosis

– Fabry’s disease

– Pompe’s disease

– TTP

– HUS

– Malignant hypertension

– Scleroderma renal crisis

– Preeclampsia/eclampsia

– HELLP

– Antiphospholipid syndrome

A

– Syncope

– Heart failure

– Angina pectoris

– Venous thromboembolism

– Thrombotic stroke

– Mesenteric thrombosis

– Coronary thrombosis

– Livedo reticularis

Hypertrophic cardiomyopathy

Thrombotic microangiopathy

Aortic stenosis

Antiphospholipid syndrome

B

FIGURE 5-2 Convergent and divergent phenotypes. Examples of the former (A) include

hypertrophic cardiomyopathy and thrombotic microangiopathy, and examples of the latter, and

(B) include aortic stenosis and antiphospholipid syndrome, each of which can have several distinct

clinical presentations. HELLP, hemolysis, elevated liver enzymes, and a low platelet count; HUS,

hemolytic-uremic syndrome; TTP, thrombotic thrombocytopenic purpura.

Environmental exposures

Epigenomic modifications

Single-cell

analyses

Integration: Network of Networks

Post-translational modifications

Microbiome interactions

Genomic

network

Transcriptomic

network

Proteomic

network

Metabolomic

network

Psychosocial

network

Clinical

phenotypes

Improved

understanding of

(patho)biology

Complex disease

reclassification

Disease prevention

Network-targeted

therapies

Precision

medicine

HO

O

O

OH

FIGURE 5-3 Universe of precision medicine. The totality of precision medicine incorporates multidimensional biologic networks, the integration of which leads to a

network of networks whose components interact with each other and with environmental exposures to yield a distinctive phenotype or pathophenotype. (Reproduced with

permission from LY-H Lee, J Loscalzo: Network medicine in pathobiology. Am J Pathol 189:1311, 2019.)

detailed history, including family history and environmental exposures, as well as relevant (physiologic) functional

studies and imaging, including molecular imaging where

appropriate. Biochemical, immunologic, and molecular

tests of body fluids provide additional detail to the overall

phenotype. Importantly, these objective laboratory tests

together with functional studies compose an assessment of

the endophenotype (or endotype) of an individual, refining the overall discriminant power of the evaluation. One

additional concept that has gained traction in recent years

is the notion of orthogonal phenotyping, i.e., assessing clinical, molecular, imaging, or functional (endo)phenotypes

seemingly unrelated to the clinical presentation. These

features further enhance the ability to distinguish (sub)

phenotypes and derive from the fact that diseases can be

subtly (subclinically) manifest in organ systems different

from that in which the primary symptoms or signs are

expressed. While some diseases are well known to affect

multiple organ systems (e.g., systemic lupus erythematosus)

and in many cases involvement of those many systems is

assessed at initial diagnosis, such is not the case for most

other diseases. As we begin to understand the differences in the organ-specific expression of genomic variants

that drive or modify disease, it is becoming increasingly

apparent that orthogonal—or more appropriately, unbiased

comprehensive—phenotyping should become the norm.

Genomic profiling must next be coupled to detailed

phenotyping. The complex levels of genomic assessment continue to mature and include DNA sequencing

(exomic, whole genome), gene expression (mRNA and

protein expression), and metabolomics. In addition, the

epigenome, the posttranslationally modified proteome,

and the metagenome (the personal microbiome of an

individual) are gaining traction as additional elements

of comprehensive genomics (Chap. 483). Not all of these

genomic features are yet available for clinical laboratory

testing, and those that are available are largely confined

to blood testing. While DNA sequencing using whole

blood would generally apply to any organ-based disease,



gene expression, metabolomics, and epigenetics are often tissue specific. As tissue specimens cannot always or easily be obtained from the

organ of interest, attempts at correlating whole-blood mRNA, protein,

or metabolite profiles with those of the involved organ are critical for

precise prognostics and therapeutic choices. In many cases, systemic

consequences to an organ-specific disease (e.g., systemic inflammatory

responses in individuals with atherosclerosis) can be ascertained and

may provide useful prognostic information or therapeutic strategies.

These biomarker signatures are the subject of ongoing discovery and

have provided useful guidance toward improved diagnostic precision

in many diseases. However, in many diseases, the correlations between

these plasma or blood markers and organ-based diseases are weak,

indicating a need to analyze each condition and each resulting signature before applying it to clinical decision-making. It is important to

note that one of the key determinants of the functional consequences

of a genetic variant believed to drive a disease phenotype is not simply

its expression in a tissue of interest but, more importantly, the coexpression of protein binding partners in that same tissue comprising

specific (dys)functional pathways that govern phenotype (Fig. 5-4). An

alternative strategy currently under investigation is the conversion of

induced pluripotent stem cells from a patient into a cell type of interest

for gene expression or metabolomics study. As rational as this approach

seems from first principles, it is important to note that gene expression

patterns in these induced, differentiated cell types are not completely

consonant with their native counterparts, offering often limited additional information at potentially great additional expense.

While phenotype features of many chronic diseases are assessed over

time, genomic features tend to be limited to single time point sampling.

Time trajectories are extremely informative in precision genotyping and

phenotyping, with gene expression patterns and phenotypes changing

over time in different ways among different patients with the same

overarching phenotype. Cost, feasible sampling frequency, predictive

power, and therapeutic choices will all drive the optimal strategy for the

acquisition of timed samples in any given patient; however, with continued cost reduction in genomics technologies, this limitation may be

progressively mitigated and clinical application may become a reality.

One important class of diseases that does not have most of these

limitations in genomic profiling is cancer. Cancers can be (and are)

sampled (biopsied) frequently to monitor temporal changes in the

somatically mutating oncogenome and its consequences for the limited number of well-defined oncogenic driver pathways (Chap. 68).

A unique limitation of cancer in this regard, however, is that the

frequency of somatic mutations over time (and, especially, with treatment) is great and the functional consequences of many of these mutations are unknown. Equally important, assessment of single-cell mRNA

sequencing patterns demonstrates great variability between apparently

similar cells, challenging functional interpretation. Lastly, in solid

tumors, stromal cells interact in a variety of ways (e.g., metabolically)

with the associated malignant cells, and their gene expression signatures are also modified by the changing somatic mutational landscape

of the primary malignancy. Thus, while much more information can be

obtained over time in most cancer patients, the interpretation of these

rich data sets remains largely semi-empirical.

The possibility of identifying specific therapeutic targets remains a

major goal of precision medicine. Doing so requires more than simple DNA sequencing and must include analysis of some level of gene

expression, ideally in the involved organ(s). In addition to demonstrating the expression of a variant protein in the organ, one must ideally

also demonstrate its functional consequences, which requires ascertaining the expression of binding partner proteins and the functional

pathways they comprise. To achieve this goal, a variety of approaches

have been tried, one of the most successful of which is the construction

of the protein-protein interactome (the interactome), which is a comprehensive network map of the protein-protein interactions in a cell or

organ of interest (Chap. 486). This template provides information on

the subnetworks that govern a disease phenotype (disease modules),

which can be further individualized by incorporating individual variants and differentially expressed proteins that are patient specific. This

type of analysis leads to the creation of an individual “reticulome” or

reticulotype, which links the genotype to the phenotype of an individual

(Fig. 5-5). Using this approach, one can identify potential drug targets

in a rational way or can even repurpose existing drugs by demonstrating the proximity of a known drug target to a disease module of interest

(Fig. 5-6). For example, in multicentric Castleman’s disease, a disorder

of unclear etiology, recognition that the PI3K/Akt/mTOR pathway is

highly activated led to trials with an existing, approved drug, sirolimus.
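One common way to quantify the "proximity of a known drug target to a disease module" is the mean shortest-path distance from the drug's targets to the nearest module proteins over the interactome. The sketch below uses a toy graph with hypothetical single-letter protein labels, not a real interactome, and omits the degree-preserving randomization used in practice to judge whether a proximity is closer than chance:

```python
from collections import deque

def shortest_path_lengths(graph, source):
    """BFS shortest-path lengths from source in an unweighted PPI graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def drug_module_proximity(graph, drug_targets, disease_module):
    """Mean distance from each drug target to its nearest disease-module protein."""
    closest = []
    for t in drug_targets:
        d = shortest_path_lengths(graph, t)
        closest.append(min(d[m] for m in disease_module if m in d))
    return sum(closest) / len(closest)

# Toy interactome (undirected adjacency sets, hypothetical proteins)
ppi = {
    "A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"},
    "D": {"B", "C", "E"}, "E": {"D", "F"}, "F": {"E"},
}
module = {"A", "B"}                               # disease module
print(drug_module_proximity(ppi, {"E"}, module))  # E-D-B has length 2
```

A drug whose targets sit close to (or inside) the disease module on this map is a rational candidate for repurposing, which is the logic behind the sirolimus example above.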

Precision medicine offers additional opportunities for optimizing the

utilization of a drug by assessing the individualized pharmacogenomics of its disposition and metabolism, as demonstrated for the adverse

consequences of variants in TPMT on azathioprine metabolism and in

CYP2C19 on clopidogrel metabolism (Chap. 68).
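At its simplest, the pharmacogenomic tailoring described here amounts to mapping a genotype-derived metabolizer phenotype to dosing guidance. The table below is purely illustrative: the gene-drug pairs come from the text, but the guidance strings are simplified placeholders, not clinical recommendations, which in practice come from published pharmacogenomic guidelines:

```python
# Illustrative only; real dosing decisions follow published
# pharmacogenomic guidelines, not this toy lookup table.
PGX_GUIDANCE = {
    ("TPMT", "normal metabolizer"): "standard azathioprine dosing",
    ("TPMT", "intermediate metabolizer"): "consider reduced azathioprine dose",
    ("TPMT", "poor metabolizer"): "consider alternative agent or markedly reduced dose",
    ("CYP2C19", "normal metabolizer"): "standard clopidogrel dosing",
    ("CYP2C19", "poor metabolizer"): "consider alternative antiplatelet agent",
}

def pgx_lookup(gene, phenotype):
    """Return the (toy) guidance string for a gene/metabolizer-phenotype pair."""
    return PGX_GUIDANCE.get((gene, phenotype), "no guidance in this toy table")

print(pgx_lookup("CYP2C19", "poor metabolizer"))
```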

■ EXAMPLES OF PRECISION MEDICINE APPLICATIONS

The field of precision medicine did not appear abruptly in medical history but, rather, evolved gradually as clinicians became more aware of differences among patients with the same disease. With the advent of modern genomics, in the ideal situation, these phenotype differences can now be mapped to genotype differences. Thus, we can consider precision medicine from the perspective of the pregenomic era and the postgenomic era. Pregenomic precision medicine was applied to many diseases as therapeutic classes expanded for those disorders. A prime example of this approach is in the field of heart failure, where diuretics, digoxin, beta blockers, afterload-reducing agents, venodilators, renin-angiotensin-aldosterone inhibitors, and brain natriuretic peptide (nesiritide) are commonly used in some combination for most patients. The choice of agents is governed by the evidence basis for their use, but tailored to the primary pathophysiologic phenotypes manifest in a patient, such as congestion, hypertension, and impaired contractility. These treatments were developed in the latter half of the last century based on empiric observation, reductionist experiments of specific pathways believed to be involved in the pathophysiology, and clinical response in prospective trials. As phenotyping became more refined (e.g., echocardiographic assessments of ventricular function and tissue Doppler characterization of ventricular relaxation), the syndrome was subclassified into heart failure with reduced ejection fraction and heart failure with preserved ejection fraction, the latter of which does not respond well to any of the classes of therapeutic agents currently available. In the postgenomic era, ever more refined and detailed methods are under investigation to characterize pathophenotypes as well as genotypes, which may then be matched to the idealized combination of therapeutic classes of agents.

Pulmonary arterial hypertension is another disease for which definitive therapies straddle the pre- and postgenomic eras of precision medicine. Prior to the 1990s, there were no effective therapies for this highly morbid and lethal condition. With the advent of molecular and biochemical characterization of vascular abnormalities in individuals with established disease, however, therapies with agents that restored normal vascular function improved morbidity and mortality. These included calcium channel blockers, prostacyclin congeners, and endothelin receptor antagonists. As genomic characterization of the disease has progressed over the past two decades, there is increasing recognition of distinct genotypes that yield unique phenotypes (Chap. 283), such as the demonstration of a primarily fibrotic endophenotype governed by the (oxidized) scaffold protein NEDD-9 and its aldosterone-dependent, TGF-β-independent enhancement of collagen III expression. This approach will continue to evolve as therapies become more effective (e.g., for perivascular fibrosis) and therapeutic choices better targeted to individual patients.

Precision genomics has also led to a new classification of the dementias, conditions previously thought to have a single cause with varied clinical expression. These disorders can now be categorized based on the genes and pathways involved and the site where aggregated proteins first form and then spread in the nervous system. For example, the varied clinical presentations of frontotemporal dementia, including progressive aphasia, behavioral disturbances, and dementia with amyotrophic lateral sclerosis, can now be linked to specific genotypes and susceptible cells (Chap. 432). In prion diseases, the clinical phenotype

