Human immunodeficiency virus (HIV) type 1 drug-resistance testing is quickly moving from the research laboratory to the clinic as data defining its utility as a prognostic indicator of response to therapy become available. In July 1998, a panel of the International AIDS Society-USA did not recommend the widespread application of resistance testing, but by May 2000 this panel endorsed and recommended the incorporation of resistance testing in patient-care management. Considerable data supporting the use of drug-resistance testing have now been published or presented at international conferences. These data strongly suggest that drug-resistance testing is of considerable value in many clinical settings. Prospective trials of resistance testing as a clinical management tool are still ongoing, and the long-term benefits still need to be evaluated. Nevertheless, early results from several studies showed a significantly better virological response when treatment regimens were based on resistance-testing data, rather than on the standard of care. HIV drug-resistance testing is also useful as a tool for new antiretroviral drug design and development, as well as for monitoring the spread of primary HIV drug resistance.
HIV-1 variants that are resistant to each of the available nucleoside reverse-transcriptase inhibitors (NRTIs), nonnucleoside reverse-transcriptase inhibitors (NNRTIs), and protease inhibitors (PIs) have been observed (for reviews, see [1–5]). They also have been described during the clinical development of several of the most recent drugs to become commercially available [6–8]. In recent years, it has become clear that viral resistance and clinical failure are closely linked in HIV-infected patients [9–13].
To date, HIV-resistance testing has been used primarily as a research tool in clinical trials, to explore the mechanisms of drug failure. The rate of appearance and extent of viral drug resistance will vary from individual to individual, depending on the drug regimen used, the duration of therapy, the individual's treatment history, the adherence to therapy, and drug pharmacokinetics. High-level resistance to lamivudine or to NNRTIs can be conferred by single point mutations in the HIV reverse transcriptase (RT) gene, whereas resistance to PIs or other RT inhibitors requires a number of concurrent mutations in the HIV protease and RT genes, respectively.
It seems logical to extend the methodology of HIV-resistance testing to routine clinical management, but, until recently, there were only limited data demonstrating the utility of testing as a patient-management tool. Thus, a consensus statement from an international panel, published in July 1998, could not recommend resistance testing in routine clinical practice (except in very limited circumstances), because of the lack of validation, standardization, and definition of the role of testing. The panel recommended that HIV RNA levels and CD4+ T cell counts guide drug therapy and that, in addition to drug resistance, issues such as adherence, drug potency, and pharmacokinetics also be considered as potential causes of treatment failure. Since the publication of this consensus statement, a number of studies have indicated that resistance testing gives additional clinically useful information. The development of commercial resistance assays also has helped address the lack of standardization. A second consensus statement, published in early 2000, which contained updated recommendations on antiretroviral therapy, recognized the value of HIV drug-resistance testing when interpreted correctly. Finally, in May 2000, the International AIDS Society-USA panel recommended the incorporation of antiretroviral drug-resistance testing into patient management. The clinical settings for which testing was recommended included first- or multiple-regimen failure and pregnancy. However, resistance testing may prove useful in other situations, such as primary HIV infection, and therefore should be considered, especially when the source of infection is known to have been exposed to antiretroviral drugs. In addition to aiding the clinical management of patients on antiretroviral therapy, HIV-resistance testing will provide important information for tracking the spread of primary resistance and may be useful in guiding future antiretroviral drug discovery and development.
There are 2 complementary approaches for the assessment of antiretroviral drug resistance in patients: genotypic testing and phenotypic testing.
Genotypic testing. Genotypic testing detects mutations in the HIV genome that correlate with resistance to antiretroviral drugs. More than 100 resistance-associated mutations have been described. Several methods exist to detect these variants, and each method has advantages and limitations. A brief summary of methods of genotypic testing is presented in table 1 (for further details, see the references therein).
In general, genotypic testing is faster, more accessible, and less costly than phenotypic testing; however, it has a number of inherent limitations. The detection of mutation(s) provides an indirect measure of drug susceptibility and is therefore informative only when the particular sequence changes have already been characterized and correlated with drug resistance. Resistance-mutation patterns are complex and require expert interpretation. The effect of mutations on the resistance phenotype is not uniform, and mutations sometimes interact in such a way that combinations of mutations can yield viruses with higher or lower resistance than otherwise would be expected. Furthermore, resistance mechanisms may be more complex during combination therapy than during monotherapy. Thus, one cannot simply add up the number or type of mutations to determine the degree of drug resistance. In fact, some genotyping methods do not capture sufficient information to allow predictions about phenotype. Under ideal circumstances, the complete coding sequence of the target genes (or more) should be determined.
Because of the ever-expanding body of knowledge and the development of new antiretroviral drugs, interpretation of genotypic data will remain a constant challenge. Expert systems based on large databases that correlate genotype and phenotype therefore are required for a sensible interpretation of viral genotype. These databases are continually expanding [18, 19], although to date they are mostly proprietary. The interpretation of genotypic data remains an area in which standardization does not exist. Although different laboratories may yield comparable genotypes for a given sample, a particular genotype may be interpreted differently by each laboratory. In the absence of convincing corresponding phenotypic data, genotypic data should be used cautiously.
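The need for rule-based interpretation, including interaction rules, can be sketched in code. The mutations, scores, and thresholds below are simplified illustrations only, not a validated clinical rule set; real expert systems draw on large genotype-phenotype databases. The one interaction shown, M184V lowering the zidovudine-resistance score, illustrates how a combination of mutations can yield less resistance than the simple sum would suggest.

```python
# Illustrative rule-based genotype interpreter.
# The mutation lists and point values are hypothetical examples.

RULES = {
    "zidovudine": {
        "single": {"T215Y": 2, "K70R": 1, "M41L": 1},
        # Interaction rule: when M184V co-occurs with T215Y, the
        # combination confers less zidovudine resistance than the
        # single-mutation scores would add up to.
        "interactions": [({"T215Y", "M184V"}, -1)],
    },
    "lamivudine": {
        "single": {"M184V": 3},  # a single point mutation suffices
        "interactions": [],
    },
}

def score_genotype(drug, mutations):
    """Return a crude resistance score for one drug (0 = susceptible)."""
    rules = RULES[drug]
    score = sum(pts for mut, pts in rules["single"].items() if mut in mutations)
    for required, adjustment in rules["interactions"]:
        if required <= mutations:  # all interacting mutations present
            score += adjustment
    return score

def interpret(score):
    if score >= 3:
        return "resistant"
    if score >= 1:
        return "intermediate"
    return "susceptible"

sample = {"M41L", "T215Y", "M184V"}
print(interpret(score_genotype("lamivudine", sample)))  # resistant
print(interpret(score_genotype("zidovudine", sample)))  # intermediate (2 + 1 - 1)
```

Note that the same three mutations yield different calls for the two drugs, which is why per-drug rule sets, rather than a simple mutation count, are required.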
In addition, there are other, less significant limitations to genotyping. A multitude of HIV variants are always present in small amounts within a clinically derived sample, and most of the methods presented in table 1 do not detect minor variants. Only differential PCR analysis or the sequencing of cloned material can detect variants present at very low prevalence (<5%) within an individual, and these methods are not practical for routine use. Direct sequencing of PCR products will detect variants that form more than ∼20% of the virus population. Furthermore, most of the genotypic assays developed to date preferentially detect HIV-1 subtype B. Finally, because the predominant plasma virus can revert rapidly to “wild type” when therapy ends [20, 21], potentially misleading results may be obtained if patients are tested after having stopped all therapy.
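The sampling arithmetic behind the clonal-sequencing detection limit can be made explicit: if a variant makes up a fraction f of the virus population, the probability that at least one of n independently sampled clones carries it is 1 − (1 − f)^n. A minimal sketch (the clone counts are illustrative, not from the studies cited above):

```python
def detection_probability(frequency, n_clones):
    """Probability that at least one of n sequenced clones carries a
    variant present at the given frequency in the virus population."""
    return 1 - (1 - frequency) ** n_clones

# A variant at 5% frequency is missed more than a third of the time
# when only 20 clones are sequenced; ~59 clones are needed to reach
# 95% confidence of seeing it at least once.
print(round(detection_probability(0.05, 20), 2))  # 0.64
print(round(detection_probability(0.05, 59), 2))  # 0.95
```

This is why low-prevalence variants are reliably detected only with labor-intensive clonal approaches, which in turn makes them impractical for routine use.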
Phenotypic testing. Unlike genotypic testing, phenotypic testing directly measures the ability of HIV to grow in the presence of a drug. Some phenotypic assays rely on the coculture of peripheral blood mononuclear cells from HIV-infected individuals, whereas others are based on recombinant DNA techniques [2, 18, 22–24]. In the latter techniques, a region of the viral gene is amplified by RT-PCR directly from plasma isolates and recombined or cloned into a “standard virus genome” that lacks the region of interest. These assays are more amenable to scale-up and automation and are now commercially available [18, 24].
Phenotypic assays allow the direct measurement of drug susceptibility by means of a dose-response curve; for example, the amount of drug required to inhibit viral replication by 50% yields the IC50 value. The susceptibility of the patient's virus is then compared with that of a “standard” laboratory strain of HIV-1. In general, phenotypic testing is somewhat more costly and requires a longer turnaround time than genotypic testing but is considered by most to represent the “gold standard” for drug-resistance testing. Preliminary data suggest that, in general, the various drug-resistance testing methods correlate reasonably well with one another [18, 25–28], although further comparative studies are needed. New approaches to resistance testing, based on direct analysis of plasma enzymatic activity or on novel cell lines, continue to be developed.
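As a rough sketch of how an IC50 and the resulting fold-change in susceptibility are derived from dose-response data: find the concentration at which inhibition crosses 50%, then divide the patient isolate's IC50 by that of the reference strain. The dose-response values below are hypothetical, and real assays fit a full sigmoid model rather than interpolating between two points.

```python
import math

def ic50(concentrations, inhibition):
    """Interpolate the drug concentration giving 50% inhibition.

    concentrations: ascending drug levels; inhibition: fraction of
    viral replication inhibited at each level (0.0-1.0).
    """
    points = list(zip(concentrations, inhibition))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= 0.5 <= i2:
            # Linear interpolation on a log-concentration scale,
            # matching the usual sigmoid dose-response plot.
            frac = (0.5 - i1) / (i2 - i1)
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical dose-response data for a reference strain and a patient isolate.
doses     = [0.01, 0.1, 1.0, 10.0]
reference = [0.05, 0.40, 0.80, 0.99]
patient   = [0.01, 0.10, 0.45, 0.90]

# ~7-fold reduced susceptibility here, above the 4-fold change that
# was sufficient to compromise the response to ritonavir or saquinavir.
fold_change = ic50(doses, patient) / ic50(doses, reference)
```

Reporting resistance as a fold-change relative to a standard strain is what makes results comparable across assay runs and laboratories.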
Applications of Resistance Testing in Clinical Settings
Retrospective drug-resistance testing. Until recently, the value of drug-resistance testing as a prognostic tool has been difficult to evaluate in a clinical setting because of the lack of prospective studies. However, several retrospective studies have assessed genotypic and/or phenotypic testing as a predictive measure of virological response in both treatment-naive and drug-experienced patients. Zidovudine was the first antiviral drug for which resistance was well documented at the genotypic and phenotypic levels. A key substitution conferring zidovudine resistance occurs at codon 215 of the HIV RT gene. The presence of the 215Y mutation is a predictor of poor response to zidovudine, to zidovudine/didanosine (with or without nevirapine), and, interestingly, to didanosine alone. A more recent genotypic and phenotypic study of individuals enrolled in trials of abacavir demonstrated that virus with either no mutations or a single mutation was susceptible to abacavir, whereas virus with multiple mutations conferring resistance to ≥3 NRTIs was generally not susceptible [7, 33]. The results of baseline genotypic and phenotypic tests of abacavir-treated patients appeared to predict the viral load response over a 24-week period [7, 33].
The potential value of resistance testing also has been assessed in several studies that included PIs and NNRTIs [24, 34–38]. Baseline phenotype was found to predict the virological outcome at week 16 in individuals who failed indinavir combination therapy and were treated with various quadruple salvage therapies. In ritonavir/saquinavir therapy of individuals for whom ≥1 prior PI regimen had failed, baseline viral genotype was found to provide additional information beyond patient treatment history and other available information; protease mutations were the strongest predictor of virological response. Another study examined patients receiving ritonavir/saquinavir (and other agents) and also found that baseline ritonavir/saquinavir genotype and/or phenotype were highly predictive of future virological response to that dual PI combination, despite the confounding effects of other drugs. Importantly, as little as a 4-fold decrease in susceptibility to ritonavir or saquinavir was sufficient to prevent an optimal response to therapy. In highly antiretroviral therapy-experienced patients receiving nelfinavir, a statistically significant relationship was observed between the number of mutations at key codons within the protease gene, the phenotypic PI susceptibility, and the response to nelfinavir in vivo. A similar study of salvage therapy including nelfinavir found that the virological response was best predicted by the number of baseline RT and protease mutations. A common theme from these data is that, although drug resistance virtually guarantees failure of therapy, drug susceptibility alone predicts but does not guarantee a favorable virological response. This emphasizes that successful treatment depends on many other factors besides drug resistance or drug susceptibility.
Perhaps the most compelling data on drug resistance are from the Resistance Collaborative Group (personal communication, J. Mellors, University of Pittsburgh and Veterans Affairs Medical Center, Pittsburgh). A reanalysis of all available resistance studies was performed using a consistent data-analysis plan. The results, presented to the US Food and Drug Administration (FDA) in October 1999 (for details, see http://www.fda.gov/ohrms/dockets/ac/99/backgrd/3541b1a.pdf), showed a fairly consistent association between drug resistance and virological outcome. This study has contributed to the incorporation of resistance testing as a requirement for HIV drug approval by the FDA.
Finally, HIV drug-resistance testing may not always help in the selection of drug regimens for patients who have failed multiple combination-therapy regimens [39, 40]. This is due primarily to limitations in the number of drugs available, rather than to the accuracy of the information. A preliminary study of a small number of salvage-therapy patients receiving multiple antiretroviral drugs concurrently indicated that patients with viral genotypes consistent with resistance to all antiretrovirals failed to respond to a regimen consisting of as many as 8 concurrent antiviral drugs. Nevertheless, some patients have achieved undetectable viral load while under multiple-drug therapy, after showing high levels of drug resistance.
Prospective drug-resistance testing. At present, the results of only a limited number of prospective studies of HIV drug-resistance testing as a clinical management tool have been published. Several abstracts presented at international conferences demonstrate the utility of resistance testing in patient management. A randomized controlled study that investigated the prospective use of genotyping in the management of anti-HIV therapy (VIRADAPT) showed significant improvement in viral load reduction over 6 months in the genotype arm of the study. Another pilot study, a randomized trial involving patients who had failed a PI-containing triple therapy, showed that, over a 12-week period, antiretroviral management based on plasma genotypic antiretroviral resistance testing combined with expert interpretation was superior to the conventional approach. A third prospective trial (VIRA 3001) found that, after 16 weeks, phenotypic resistance testing significantly improved virological outcome when used to guide treatment decisions. Some treatment facilities have started using genotyping before designing patients' combination therapies, to include drugs for which no mutations were observed. A clinic cohort study also has been initiated to assist with therapy management for patients who are either initiating or changing antiretroviral therapy regimens. Despite the difficulties associated with the design and implementation of prospective clinical trials of resistance testing, validation trials that should provide much-needed information have been initiated.
Monitoring the Spread of Primary Resistance to Antiretrovirals
The transmission of drug-resistant variants is a significant public health concern and may become increasingly common in certain populations. Since the first documented cases of transmission of zidovudine-resistant isolates, there have been numerous reports indicating that a significant number of primary-infection HIV isolates are already drug resistant. These viruses show resistance not only to NRTIs [49–53] but also to PIs and NNRTIs [54–56]. Not surprisingly, patients with primary resistance often respond less well to antiretroviral therapy [53, 57]. In a German study, nearly 10% of drug-naive patients were found to have preexisting mutations within the HIV RT gene. On treatment with soft-gel saquinavir, zidovudine, and zalcitabine, the reduction in plasma viral load was significantly smaller in those individuals than in the rest of the population examined. Recently, the prevalence of reduced susceptibility to antiretroviral drugs, particularly to NNRTIs, in individuals with primary HIV infection has sparked interest in HIV drug-resistance testing for newly infected individuals [58, 59]. In the case of NNRTIs, however, modest decreases in susceptibility were not always associated with poor virological outcome. The clinical significance of low-level resistance to other classes of drugs awaits further investigation.
Use of Resistance Data in Drug Development
It is difficult to design appropriate “salvage therapies” for patients who have failed to respond to 2 or 3 drug regimens. The number of viable treatment options for such HIV-infected individuals is limited in part by the problem of cross-resistance among drugs, particularly within the PI and NNRTI classes [6, 54], and by the number of drugs available. Patients who have received sequential therapies often harbor virus with multiple concurrent mutations, which can render it resistant not only to currently available drugs but also to drugs that are in clinical development.
It is not a coincidence that many newly developed compounds are less effective against highly mutated viruses: in general, screening for antivirals of a given class uses a wild-type virus or protein as a target. Since many patients have multiple changes in HIV RT and protease genes, it would be somewhat surprising if a compound selected on the basis of its activity against wild-type virus could retain full activity against a distinct target. There is a pressing need to use highly resistant HIV isolates or mutated recombinant HIV proteins in programs to screen and develop drugs, because the patients most at risk of imminent disease progression are also the ones least likely to respond to newly developed agents. Antiretrovirals that appear to be effective against at least a few drug-resistant viruses are now in development. Only routine use of these agents in the clinic will determine whether their early promise is fulfilled.
In the near future, drug-resistance testing will likely become a widely used clinical tool, similar to viral-load testing. The cost of resistance testing may decrease and should be reimbursable by medical insurers. Companies that provide genotypic testing will need to demonstrate the accuracy of their phenotypic interpretations and justify the clinical significance of the specific mutations observed. Pressure for the development of more-sensitive tests also will increase. Eventually, plasma drug levels also may be monitored to relate clinically significant drug levels to phenotypic resistance values. This ultimately will allow the integration of various technologies— such as viral-load testing, drug-resistance testing, and effective drug-level monitoring—to further individualize and optimize antiretroviral therapies.
Until recently, antiretroviral drug-resistance testing was not recommended for routine use but was considered for drug-naive pregnant women or persons with primary infection from a population in which the prevalence of drug resistance is high. Despite its present limitations, HIV drug-resistance testing can already provide valuable information for choosing appropriate antiretroviral combinations and is recommended, or should be considered, in a wide range of situations. For example, resistance testing after a rebound in viral load will help avoid the use of drugs that are less likely to exert a beneficial effect. The greatest benefit to patients may be derived from the longitudinal assessment of drug resistance as treatment continues over a long period of time. Further standardization and technological advances may soon open the door to the improved use of our limited therapeutic drug arsenal against HIV.