Browsing Posts published in July, 2011

    The introduction of prostate-specific antigen (PSA) testing has resulted in increased diagnoses of early-stage disease. Less is known about the role of short-term ADT in men receiving radiotherapy for these cancers. Accordingly, in 1994, the RTOG opened a large, randomized trial, RTOG 94-08, to evaluate whether adding short-term ADT to radiotherapy would improve survival among patients with nonbulky localized prostate adenocarcinomas and an initial PSA level of 20 ng per milliliter or less.

    Patients with histologically confirmed prostate adenocarcinoma, stage T1b, T1c, T2a, or T2b (according to the 1992 classification of the American Joint Committee on Cancer), and a PSA level of 20 ng per milliliter or less were eligible for this international phase 3 study. Pretreatment evaluation included a digital rectal examination and bone scan. The regional lymph nodes were evaluated surgically by means of lymph-node sampling or clinically by means of lymphangiography or pelvic computed tomography. The Gleason score (the sum of the two most common histologic patterns or grades in a prostate tumor, each of which is graded on a scale of 1 to 5, with 5 indicating the most aggressive pattern) was determined, and tumors were also classified as well differentiated, moderately differentiated, or poorly differentiated. Eligibility criteria included a Karnofsky performance score of 70 or more (on a scale of 0 to 100, with higher scores indicating better performance status), an alanine aminotransferase level that was no more than twice the upper limit of the normal range, no evidence of regional lymph-node involvement or distant metastatic disease, and no previous chemotherapy, radiotherapy, hormonal therapy, cryosurgery, or definitive surgery for prostate cancer. Patients with previous basal-cell or squamous-cell skin carcinomas who had been disease-free for 2 years or more before study entry, and patients with invasive cancers who had been disease-free for 5 years or more, were eligible if their participation was approved by the study cochairs. The institutional review boards of the participating institutions approved the study protocol, and all patients provided written informed consent. The National Cancer Institute sponsored the study. The drugs were purchased from vendors. No commercial support was provided for this study.

    Study Design
    After stratification according to PSA level (<4 vs. 4 to 20 ng per milliliter), tumor grade (well differentiated, moderately differentiated, or poorly differentiated), and surgical versus clinical documentation of negative regional nodal status, patients were randomly assigned to receive either radiotherapy plus short-term ADT or radiotherapy alone, according to the permuted-block randomization method described by Zelen. The RTOG carried out this trial and was responsible for data collection, statistical analysis, study design, and preparation of the manuscript.

    It is not known whether short-term androgen-deprivation therapy (ADT) before and during radiotherapy improves cancer control and overall survival among patients with early, localized prostate adenocarcinoma.

    From 1994 through 2001, we randomly assigned 1979 eligible patients with stage T1b, T1c, T2a, or T2b prostate adenocarcinoma and a prostate-specific antigen (PSA) level of 20 ng per milliliter or less to radiotherapy alone (992 patients) or radiotherapy with 4 months of total androgen suppression starting 2 months before radiotherapy (radiotherapy plus short-term ADT, 987 patients). The primary end point was overall survival. Secondary end points included disease-specific mortality, distant metastases, biochemical failure (an increasing level of PSA), and the rate of positive findings on repeat prostate biopsy at 2 years.

    The median follow-up period was 9.1 years. The 10-year rate of overall survival was 62% among patients receiving radiotherapy plus short-term ADT (the combined-therapy group), as compared with 57% among patients receiving radiotherapy alone (hazard ratio for death with radiotherapy alone, 1.17; P = 0.03). The addition of short-term ADT was associated with a decrease in the 10-year disease-specific mortality from 8% to 4% (hazard ratio for radiotherapy alone, 1.87; P = 0.001). Biochemical failure, distant metastases, and the rate of positive findings on repeat prostate biopsy at 2 years were significantly improved with radiotherapy plus short-term ADT. Acute and late radiation-induced toxic effects were similar in the two groups. The incidence of grade 3 or higher hormone-related toxic effects was less than 5%. Reanalysis according to risk showed reductions in overall and disease-specific mortality primarily among intermediate-risk patients, with no significant reductions among low-risk patients.

    Among patients with stage T1b, T1c, T2a, or T2b prostate adenocarcinoma and a PSA level of 20 ng per milliliter or less, the use of short-term ADT for 4 months before and during radiotherapy was associated with significantly decreased disease-specific mortality and increased overall survival. According to post hoc risk analysis, the benefit was mainly seen in intermediate-risk, but not low-risk, men.

    In the 1980s, advances in both surgery and radiotherapy for clinically localized prostate cancer led to their acceptance as successful treatments, with considerable reductions in harmful side effects as compared with earlier treatments. In the 1990s, reversible androgen suppression with the use of luteinizing hormone–releasing hormone analogues and oral antiandrogen agents was shown to induce apoptotic regression in androgen-responsive cancers, potentially improving the prospects of local control and the duration of survival free of metastatic disease. Among patients with locally advanced disease, phase 3 clinical trials showed that when added to radiotherapy, long-term treatment with these agents (≥2 years) improved overall survival but also increased toxic effects, including erectile dysfunction and myocardial infarction.(5) Short-term androgen-deprivation therapy (ADT) could potentially mitigate these toxic effects. A Radiation Therapy Oncology Group (RTOG) phase 3 clinical trial, reported in 1994, showed that short-term ADT administered for 4 months before and during radiation therapy significantly improved local control and disease-free survival among patients with bulky stage T2c to T4 tumors. Other trials have also shown benefits from this approach.

    National efforts are under way to link together large administrative databases to permit hypotheses concerning adverse associations to be tested more rigorously. To enable medical sleuths to detect the initial scent that leads them to track a low-frequency adverse drug effect, more open and better reporting is needed. But fuller reporting of all adverse experiences without filtering on the basis of statistical significance or perceived causality would result in the publication of more tedious supplemental tables that primarily contribute to information overload. Similarly, Justice Sotomayor noted that such unfiltered information “could bury the shareholder in an avalanche of trivial information.”

    However, in cases in which data showing initially seemingly unimportant imbalances eventually add up to a clear signal of an adverse action of a drug, transparent early reports could reduce the likelihood of litigious arguments concerning who knew what and when. Individual manufacturers should not be in the position of determining what information is considered material for public dissemination. Physicians are also well aware that our fallible but important recommendations are based on our best assessments of incomplete data, with levels of evidence ranging from firm to anecdotal. With the finality afforded by the Supreme Court decision in Matrixx, investors can be assured of increased access to statistically nonsignificant information regarding reports of adverse drug experiences. Although the medical establishment lacks legal authority, it could use its standards-setting powers to improve access to the same level of information. The resulting flood of data, though likely to represent biologic noise rather than evidence sufficient to establish even “probable cause,” would contribute to the total mix of available information and might, under some circumstances, influence reasonable prescribers and patients to alter their treatment plans: a sommelier, for example, might consider any report of anosmia to be material.

    Dr. Pfeffer is a physician in the Cardiovascular Division, Department of Medicine, Brigham and Women’s Hospital, and a professor at Harvard Medical School; and Ms. Bowler is a U.S. magistrate judge, U.S. District Court — all in Boston.

    In the recent unanimous decision in Matrixx Initiatives v. Siracusano, the U.S. Supreme Court applied the “fair preponderance of the evidence” standard of proof used for civil matters, in which a particular conclusion is deemed “more likely than not” to be justified. At issue was whether Matrixx had violated federal securities laws by failing to disclose to shareholders sporadic reports of anosmia associated with the use of its Zicam nasal spray before the Food and Drug Administration (FDA) issued a warning about that association in 2009. The question before the Court was not whether the drug caused the loss of smell, but rather whether the company failed to provide material information to the investor plaintiffs that would have led a “reasonable shareholder” to alter his or her investment strategy. The initial trial court was persuaded by the company’s primary argument that the evidence suggesting that its product caused anosmia did not reach statistical significance and therefore should not have been considered material. In upholding the ruling of the appellate court, which had reversed the trial court’s decision, the Supreme Court ruled that whether or not it was considered statistically significant, the information about the seemingly infrequent occurrences of loss of smell after use of the product was indeed material to investors. Speaking for the undivided Court, Justice Sonia Sotomayor also acknowledged that the mere existence of reports of adverse events associated with a drug does not prove causality — but asserted that such a high level of proof did not have to be achieved. 
Similarly, under the Code of Federal Regulations for the FDA, warnings and precautions regarding the safety of drugs must be revised to include information on “a clinically significant hazard as soon as there is reasonable evidence of a causal association with a drug; a causal relationship need not have been definitely established.” There is no requirement for statistical significance.

    Clinicians are well aware that to be considered material, information regarding drug safety does not have to reach the same level of certainty that we demand for demonstrating efficacy. We understand that clinical trials that are designed to prove that a drug is effective use preplanned statistical analyses focused on a specific, carefully defined and adjudicated primary end point. Moreover, the number of subjects who will have to experience this targeted event for researchers to adequately test whether it occurs at the same rate as it does in a comparison group (the trial’s statistical power) is also established before the study begins. This same carefully constructed statistical framework is not, and understandably cannot be, used for evaluating unplanned and uncommon adverse events. When studying safety, we search for signals of imbalances and attempt to piece together multiple underpowered comparisons to obtain a better estimate of the risk. Sorting the wheat of true adverse drug effects from the chaff of biologic variability and chance associations is exceedingly difficult. A staggering and increasing number of reports are received by the FDA’s Adverse Event Reporting System (AERS) each year — more than half a million in 2009.
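
    The underpowered-comparison problem described above can be made concrete with a small sketch. The counts below are hypothetical, and the Fisher exact test is a generic illustration of how rare-event imbalances are compared, not the FDA's or any trial's actual method:

```python
# Hedged sketch (hypothetical counts): a real-looking adverse-event imbalance
# can still fall far short of conventional statistical significance.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of every table with the same margins that is
    no more likely than the observed one (the standard two-sided rule).
    """
    n1, n2 = a + b, c + d          # row totals (e.g. patients per arm)
    k, n = a + c, a + b + c + d    # total events, grand total
    denom = comb(n, k)
    p_obs = comb(n1, a) * comb(n2, k - a) / denom
    p = 0.0
    for x in range(max(0, k - n2), min(k, n1) + 1):
        p_x = comb(n1, x) * comb(n2, k - x) / denom
        if p_x <= p_obs + 1e-12:
            p += p_x
    return p

# Hypothetical safety comparison: 5/1000 events on drug vs. 1/1000 on
# placebo -- a five-fold imbalance, yet p is roughly 0.22, nowhere near
# the alpha = 0.05 threshold used to declare efficacy.
p = fisher_exact_two_sided(5, 995, 1, 999)
print(round(p, 3))
```

    The same imbalance observed across several such underpowered comparisons is exactly the kind of "signal" the text describes piecing together.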

    The legal and medical systems both strive for truth while acknowledging that there are no absolutes. Both systems require evidence, which they categorize in a hierarchy of levels, on which to base decisions that can have major effects on the quality and even quantity of people’s lives. In law, the strictest standard of proof applies in criminal matters, in which the presumption of innocence requires that guilt be established “beyond a reasonable doubt” to attempt to rule out the possibility of convicting an innocent person — though of course the application of this level of proof carries the risk of occasional acquittal of a defendant who is actually guilty. A lower, but still relatively stringent, standard of proof, that of “clear and convincing” evidence, applies to certain discrete civil matters and criminal matters such as the setting of bail. A still lower standard requiring conclusions based on a “fair preponderance of the evidence” applies in the great majority of civil matters, and an even lower standard in which only “probable cause” must be established permits certain criminal proceedings to be initiated. These levels of legal proof have analogies in medicine. Clinical trials use alpha (significance) levels, confidence intervals, and statistical power to gauge levels of certainty. To reject the null hypothesis (that a result occurred merely by chance) and deem an intervention effective in a clinical trial, the level of proof analogous to law’s “beyond a reasonable doubt” standard would require an extremely stringent alpha level to permit researchers to claim a statistically significant effect, with the offsetting risk that a truly effective intervention would sometimes be deemed ineffective. Instead, most randomized clinical trials are designed to achieve a somewhat lower level of evidence that in legal jargon might be called “clear and convincing,” making conclusions drawn from it highly probable or reasonably certain.

    Although errors can be made by both the judicial system and the medical research system, the former provides the opportunity to appeal a court’s decision, and in the latter, reproducibility or independent confirmation of a result greatly enhances the reliability of findings. Unlike the categorical decisions of the courts, which immediately carry the weight of the law regardless of their popularity, the results of a clinical trial can have greater or lesser impact depending on their eventual degree of acceptance by the medical community. The data from clinical trials are generally initially disseminated in peerreviewed medical journals, at scientific meetings, or both. The ultimate influence of a study then depends on the interpretation of the importance of its results by national guideline-setting committees, as well as by more local physician groups at journal clubs, morning reports, and rounds. In such scholarly dissections of the trial data, statistical tests represent only one aspect of the intense scrutiny applied in assessing the quality and robustness of the findings.

    A normal circadian cortisol pattern is one in which there is a rise before waking (before 7-8 AM), followed by a gradual decline throughout the rest of the day.(26) Twelve patients had recognizable circadian patterns of cortisol fluctuation, hereafter referred to as Cortisol Pattern 1 (CP1). Of the twelve with CP1 who were classified in the “normal” pattern, six had normal values at all four test time points, as well as a normal total cortisol value, or burden. Thus, of the 29 total charts reviewed, only 21% of the patients would be classified with both normal cortisol values and a normal pattern. The remaining six patients with a “normal” pattern of cortisol secretion (high on waking and then decreasing over the rest of the day) had abnormal values at one or more time points.

    Seventeen of the 29 total patients fell into our classification of dysregulated circadian cortisol patterns. This classification comprised patients whose cortisol curves did not slope downward over the course of the day. Fourteen of these 17 patients had cortisol values outside the normal range at one or more time points, or in total cortisol burden. The dysregulated patients’ cortisol plots fell into three distinct patterns, hereafter called Cortisol Pattern 2 (CP2), Cortisol Pattern 3 (CP3), and Cortisol Pattern 4 (CP4).
    Patients grouped into CP1 began with a burst of cortisol between 7 and 8 AM (13-23 nM) and dropped off slowly throughout the day. Normal values are considered 4-8 nM between 11 AM and noon, 4-8 nM between 4 and 5 PM, and 1-3 nM between 11 PM and midnight. Notably, even among the patients who presented with normal patterns, there was a large range in the cortisol levels excreted at any specific time point. Patients classified with CP2 tended to have abnormal circadian cortisol levels that peaked in the late morning (11 AM to noon) rather than on waking; their levels then dropped off or stayed the same later in the day and at midnight. In contrast, patients grouped into CP3 had peaks between 7 and 8 AM and a sharper drop at noon, which created a second “mini” peak on the graph between 4 and 5 PM, although those levels were actually closer to normal. Patients assigned to CP4 showed a distinct pattern of cortisol increasing between 11 PM and midnight. While a normal value falls between 1-3 nM at this time point, these patients averaged 15.4 nM (SD 8.3). One of the patients had extremely high cortisol levels at two time points, including time point 4 (11 PM to midnight). If the data are reanalyzed without this patient, the average cortisol value remains high, at 6.75 nM (SD 2.06).
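
    The reference ranges quoted above lend themselves to a small illustrative check. The thresholds are taken from the text; the helper names and the pattern check are assumptions for illustration, not the study's actual classification algorithm:

```python
# Hedged sketch: flag out-of-range salivary cortisol values using the
# reference ranges quoted in the text. The logic is illustrative only.

# (low, high) in nM for the four collection windows
NORMAL_RANGES = [
    (13, 23),  # 7-8 AM (waking burst)
    (4, 8),    # 11 AM - noon
    (4, 8),    # 4-5 PM
    (1, 3),    # 11 PM - midnight
]

def out_of_range(values):
    """Return indices of time points whose value falls outside the range."""
    return [i for i, (v, (lo, hi)) in enumerate(zip(values, NORMAL_RANGES))
            if not lo <= v <= hi]

def declines_all_day(values):
    """True when cortisol never rises between consecutive time points
    (the downward-sloping shape of the normal, CP1-like pattern)."""
    return all(a >= b for a, b in zip(values, values[1:]))

# Hypothetical patient: normal waking burst but elevated at midnight,
# resembling the CP4 pattern described in the text.
day = [18.0, 6.5, 5.0, 15.4]
print(out_of_range(day))      # only the midnight value is flagged
print(declines_all_day(day))  # False: cortisol rises late in the day
```
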

    Typically, the highest cortisol value of the day occurs at the waking time point. Yet, in the charts reviewed, when levels were low at all, they tended to be low at this time point. One patient with low total cortisol had consistently low cortisol levels except at the late-night point, when cortisol is typically lowest. Three patients had consistently high cortisol levels, and two of these also had high total cortisol values.

    In healthy individuals, cortisol levels exhibit a circadian pattern, peaking in the morning and decreasing the rest of the day. Studies are inconclusive as to the relationship between high total cortisol levels and obesity, prediabetes, and type 2 diabetes. Since cortisol levels are circadian, many naturopathic physicians use a salivary test performed at four time points during the day to measure the overall pattern of secretion, rather than relying upon a blood draw from a single time point. Some physicians hypothesize that a dysregulated pattern of cortisol is more indicative of diabetes risk than a high mean cortisol level. A retrospective chart review was performed on obese, prediabetic, and type 2 diabetes patients in order to test this hypothesis. The goals of this study were to determine whether people with, or at risk for, type 2 diabetes have abnormal circadian cortisol patterns and dehydroepiandrosterone (DHEA) levels. The chart review demonstrated four patterns of cortisol secretion, one of which is circadian, in this population.

    Type 2 diabetes is a multi-factorial disease characterized by mild to severe glucose dysregulation and associated with increased mortality and the development of polyneuropathy, nephropathy, and heart disease. A diagnosis of obesity and/or prediabetes increases the risk of developing type 2 diabetes.(1,2) In healthy individuals, blood glucose concentrations are maintained between 80 and 100 mg/dL. Ingestion of carbohydrates causes an increase in blood glucose concentration and subsequent release of insulin by beta cells within the pancreatic islets. Insulin lowers blood glucose both by decreasing hepatic glucose production and by accelerating the uptake of glucose into peripheral tissues. One of the probable first steps in the development of type 2 diabetes is insulin resistance, defined as impaired sensitivity to a normal concentration of insulin. Insulin resistance is a common factor in obesity, prediabetes, and type 2 diabetes.

    In human studies, high cortisol has been shown to contribute to insulin resistance and is likely involved in the development of type 2 diabetes, as well as the persistence of high glucose levels. Cortisol is a glucocorticoid hormone produced by the adrenal cortex that is involved in the regulation of mineralocorticoids, blood pressure, immune function, and metabolism. Conditions associated with excess cortisol include hypertension, hypercholesterolemia, central obesity, and glucose intolerance. In fact, one of the likely mechanisms by which cortisol contributes to these diseases is by inducing a state of insulin resistance. As the primary glucocorticoid released during stress, cortisol has a variety of actions: 1) it impairs insulin-dependent glucose uptake in the periphery, 2) it enhances gluconeogenesis in the liver, and 3) it inhibits insulin secretion from pancreatic beta cells. All of these actions contribute to elevated glucose levels. Dysregulated cortisol levels have been demonstrated in persons with insulin resistance, prediabetes, and type 2 diabetes. Prediabetes is characterized by a fasting plasma glucose between 100 and 126 mg/dL; this is also known as impaired fasting glucose (IFG) or, when identified by a glucose-tolerance test, impaired glucose tolerance (IGT). A fasting plasma glucose of 126 mg/dL or more is diagnostic of type 2 diabetes. Cortisol normally follows a circadian pattern of secretion, peaking about 30 minutes after waking and then gradually decreasing throughout the rest of the day. Cortisol should be lowest in the evening, allowing sleep at night. Because of the circadian nature of cortisol secretion, dysregulation may not be apparent if only total cortisol is measured in blood at a single time point.
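
    The fasting-glucose cutoffs above can be expressed as a simple classifier. The thresholds come from the text; the boundary handling (treating exactly 100 and 126 mg/dL as falling in the higher category) follows common convention and is an assumption:

```python
# Hedged sketch of the fasting-plasma-glucose cutoffs described in the text.
# Exact boundary handling is an assumption, not taken from the study.
def classify_fasting_glucose(mg_dl: float) -> str:
    """Classify a fasting plasma glucose value (mg/dL)."""
    if mg_dl >= 126:
        return "type 2 diabetes range"
    if mg_dl >= 100:
        return "prediabetes (IFG)"
    return "normal"

for fpg in (92, 110, 130):
    print(fpg, classify_fasting_glucose(fpg))
```
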

    Cortisol and dehydroepiandrosterone (DHEA) are produced in closely related metabolic pathways, and DHEA is an additional factor to consider in the development of type 2 diabetes. The production of both DHEA and cortisol by the adrenal cortex is regulated by the release of adrenocorticotropic hormone (ACTH) from the anterior pituitary. DHEA and DHEA-sulfate (DHEA-S) are metabolic intermediates in the formation of the active sex steroids testosterone, dihydrotestosterone, and estrogen. In human studies, exogenously administered glucocorticoids reduce basal and ACTH-stimulated blood levels of DHEA and DHEA-S.(14) Several studies have suggested that DHEA and DHEA-S are related to glucose and insulin regulation: a decrease in DHEA and DHEA-S is observed when humans are rendered hyperinsulinemic, and a reduction in serum insulin is associated with an increase in serum DHEA and DHEA-S.(17) Cortisol and DHEA can also be measured in saliva, and salivary levels have been found to correlate with plasma levels. In clinical practice, some physicians order salivary cortisol and DHEA tests for patients who have or are at risk for diabetes, or to diagnose and monitor adrenal function.

    The salivary cortisol test requires patients to collect saliva samples at home four times during one day: in the morning, at noon, in the afternoon, and at night. Behaviors that may compromise the sample are discouraged; smoking, posture, and eating, for example, can all influence salivation and may thus introduce artifacts into the sample. As a result, patients are given very specific directions for collecting their saliva samples (described in detail in the Methods section of this paper). A chart review was performed to examine the patterns of cortisol secretion and levels of DHEA in patients suspected of dysregulated glucose metabolism.
    Data were collected and reviewed from 29 patient charts from a naturopathic primary care clinic in Portland, OR. Informed consent for records review was obtained upon admission to the clinic. All data were coded to remove any identifiable information. In naturopathic clinics, individuals who are suspected of cortisol/DHEA dysregulation for reasons related to prediabetes or type 2 diabetes often undergo a clinical laboratory test called the Adrenal Stress Index™ (ASI™, Diagnos-Techs, Kent, WA). The ASI™ measures cortisol, DHEA, sIgA, and anti-gluten antibodies in saliva collected during the day. According to Diagnos-Techs, the analytic sensitivity of this test is 0.8 to 1.0 nM, and the specificity of the immunoassay for cortisol is 99% or greater.(25) In diabetics and those at risk for diabetes, naturopathic physicians often order this test to help establish the etiology of the disease. The chart review covered patients who were predominantly untreated or uncontrolled diabetics, as well as patients presenting with symptoms that indicated insulin resistance and/or adrenal dysfunction, who had an ASI™ recorded in their chart. Additional patient care, number of visits, and information on medication intake were unavailable for review.

    Patients were instructed to follow the aforementioned salivary collection protocol, but compliance was not assessed in the charts. Salivary samples were self-collected by the patient at four intervals in one day (between 7-8 AM, 11 AM-12 PM, 4-5 PM, and 11 PM-midnight). Patients were instructed not to eat or drink; use antacids, bismuth, or mouthwash; brush their teeth; or smoke for 30-60 minutes before collecting a sample. They were also instructed not to eat more than one tablespoon of chocolate, onions, garlic, cabbage, cauliflower, or broccoli, or to drink coffee, tea, or caffeine on the day of collection. Patients were instructed to maintain a typical exercise regimen and activity level to obtain representative daily results. A sample consisted of saliva collected on a cotton roll held in the mouth until saturated and then placed in a 5 mL tube. Samples were refrigerated and mailed within 3 days; samples are considered stable for a week at room temperature. The ASI™ tests were evaluated at Diagnos-Techs’ lab in Kent, WA.

    The saliva samples were analyzed for cortisol by ELISA. DHEA and DHEA(S) were analyzed by ELISA using pooled samples from the noon and afternoon time points. Data were entered and analyzed in Microsoft Excel. A total of 29 ASI™ tests in 29 patients were found. For 28 of these patients, DHEA levels were also available. Serum fasting blood glucose levels were available for 20 of these patients.