Widespread public and scientific interest in promoting the care and well-being of animals used for toxicity testing has driven improvements in animal welfare practices over time, as well as laws and regulations that support efforts to reduce, refine, and replace animal use (known as the 3Rs) in certain toxicity studies. One way these regulations continue to achieve their aim is by promoting the research, development, and application of alternative testing approaches that characterize potential toxicities either without animals or with minimal animal use. An important example of an alternative approach is the use of computational toxicology models. Beyond their potential to reduce or replace the use of animals in the assessment of particular toxicological endpoints, computational models offer several advantages over in vitro and in vivo approaches, including cost-effectiveness, rapid availability of results, and the ability to fully standardize procedures. Pharmaceutical research incorporating computational models has increased steadily over the past 15 years, likely driven by the motivation of companies to screen out toxic compounds in the early stages of development. Models are currently available to aid in the prediction of several important toxicological endpoints, including mutagenicity, carcinogenicity, eye irritation, hepatotoxicity, and skin sensitization, albeit with varying degrees of success. This review introduces the concepts of computational toxicology and evaluates their role in the safety assessment of compounds, while also highlighting the application of in silico methods in supporting the goals of the 3Rs.
