In our recent paper on the clinical pharmacology of tafenoquine (Watson et al., 2022), we used all available individual patient pharmacometric data from the tafenoquine pre-registration clinical efficacy trials to characterise the determinants of anti-relapse efficacy in tropical vivax malaria. We concluded that the currently recommended dose of tafenoquine (300 mg in adults, average dose of 5 mg/kg) is insufficient for cure in all adults, and a 50% increase to 450 mg (7.5 mg/kg) would halve the risk of vivax recurrence by four months. We recommended that clinical trials of higher doses should be carried out to assess their safety and tolerability. Sharma and colleagues at the pharmaceutical company GSK defend the currently recommended adult dose of 300 mg as the optimum balance between radical curative efficacy and haemolytic toxicity (Sharma et al., 2024). We contend that the relative haemolytic risks of the 300 mg and 450 mg doses have not been sufficiently well characterised to justify this opinion. In contrast, we provided evidence that the currently recommended 300 mg dose results in sub-maximal efficacy, and that prospective clinical trials of higher doses are warranted to assess their risks and benefits.
Background: A critical and persistent challenge to global health and modern health care is the threat of antimicrobial resistance (AMR). Previous studies have reported a disproportionate burden of AMR in low-income and middle-income countries, but there remains an urgent need for more in-depth analyses across Africa. This study presents one of the most comprehensive sets of regional and country-level estimates of bacterial AMR burden in the WHO African region to date. Methods: We estimated deaths and disability-adjusted life-years (DALYs) attributable to and associated with AMR for 23 bacterial pathogens and 88 pathogen–drug combinations for countries in the WHO African region in 2019. Our methodological approach consisted of five broad components: the number of deaths in which infection had a role, the proportion of infectious deaths attributable to a given infectious syndrome, the proportion of infectious syndrome deaths attributable to a given pathogen, the percentage of a given pathogen resistant to an antimicrobial drug of interest, and the excess risk of mortality (or duration of an infection) associated with this resistance. These components were then used to estimate the disease burden by using two counterfactual scenarios: deaths attributable to AMR (considering an alternative scenario where infections with resistant pathogens are replaced with susceptible ones) and deaths associated with AMR (considering an alternative scenario where drug-resistant infections would not occur at all). We obtained data from research hospitals, surveillance networks, and infection databases maintained by private laboratories and medical technology companies. We generated 95% uncertainty intervals (UIs) for final estimates as the 25th and 975th ordered values across 1000 posterior draws, and models were cross-validated for out-of-sample predictive validity.
Findings: In the WHO African region in 2019, there were an estimated 1·05 million deaths (95% UI 829 000–1 316 000) associated with bacterial AMR and 250 000 deaths (192 000–325 000) attributable to bacterial AMR. The largest fatal AMR burden was attributed to lower respiratory and thorax infections (119 000 deaths [92 000–151 000], or 48% of all estimated bacterial pathogen AMR deaths), bloodstream infections (56 000 deaths [37 000–82 000], or 22%), intra-abdominal infections (26 000 deaths [17 000–39 000], or 10%), and tuberculosis (18 000 deaths [3850–39 000], or 7%). Seven leading pathogens were collectively responsible for 821 000 deaths (636 000–1 051 000) associated with resistance in this region, with four pathogens exceeding 100 000 deaths each: Streptococcus pneumoniae, Klebsiella pneumoniae, Escherichia coli, and Staphylococcus aureus. Third-generation cephalosporin-resistant K pneumoniae and meticillin-resistant S aureus were shown to be the leading pathogen–drug combinations in 25 and 16 countries, respectively (53% and 34% of the whole region, comprising 47 countries) for deaths attributable to AMR. Interpretation: This study reveals a high level of AMR burden for several bacterial pathogens and pathogen–drug combinations in the WHO African region. The high mortality rates associated with these pathogens demonstrate an urgent need to address the burden of AMR in Africa. These estimates also show that quality and access to health care and safe water and sanitation are correlated with AMR mortality, with a higher fatal burden found in lower resource settings. Our cross-country analyses within this region can help local governments to leverage domestic and global funding to create stewardship policies that target the leading pathogen–drug combinations. Funding: Bill & Melinda Gates Foundation, Wellcome Trust, and Department of Health and Social Care using UK aid funding managed by the Fleming Fund.
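The Methods above define the 95% uncertainty interval as the 25th and 975th ordered values across 1000 posterior draws. That construction can be sketched in a few lines; the draws below are simulated for illustration, and `uncertainty_interval` is a hypothetical helper name, not the study's code.

```python
import numpy as np

def uncertainty_interval(draws):
    """95% UI as the 25th and 975th ordered values of the posterior draws
    (i.e., the 2.5th and 97.5th percentiles for 1000 draws)."""
    ordered = np.sort(np.asarray(draws, dtype=float))
    n = len(ordered)
    lo = ordered[int(0.025 * n) - 1]  # 25th ordered value when n = 1000
    hi = ordered[int(0.975 * n) - 1]  # 975th ordered value when n = 1000
    return lo, hi

# Simulated posterior draws for an AMR-attributable death count.
rng = np.random.default_rng(0)
draws = rng.normal(loc=250_000, scale=30_000, size=1000)
lo, hi = uncertainty_interval(draws)
point_estimate = draws.mean()
```

Reporting the interval as ordered draw values (rather than a parametric standard error) lets the UI reflect any asymmetry in the posterior, which is why such estimates often have skewed bounds.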
Background: Copper (Cu), an essential trace mineral regulating multiple actions of inflammation and oxidative stress, has been implicated in risk for preterm birth (PTB). Objectives: This study aimed to determine the association of maternal Cu concentration during pregnancy with PTB risk and gestational duration in a large multicohort study including diverse populations. Methods: Maternal plasma or serum samples of 10,449 singleton live births were obtained from 18 geographically diverse study cohorts. Maternal Cu concentrations were determined using inductively coupled plasma mass spectrometry. The associations of maternal Cu with PTB and gestational duration were analyzed using logistic and linear regressions for each cohort. The estimates were then combined using meta-analysis. Associations between maternal Cu and acute-phase reactants (APRs) and infection status were analyzed in 1239 samples from the Malawi cohort. Results: The maternal prenatal Cu concentration in our study samples followed a normal distribution with a mean of 1.92 μg/mL and a standard deviation of 0.43 μg/mL, and Cu concentrations increased with gestational age up to 20 wk. The random-effects meta-analysis across 18 cohorts revealed that a 1 μg/mL increase in maternal Cu concentration was associated with a higher risk of PTB, with an odds ratio of 1.30 (95% confidence interval [CI]: 1.08, 1.57), and a shorter gestational duration of 1.64 d (95% CI: 0.56, 2.73). In the Malawi cohort, higher maternal Cu concentration, concentrations of multiple APRs, and infections (malaria and HIV) were correlated and associated with a greater risk of PTB and shorter gestational duration. Conclusions: Our study supports a robust negative association between maternal Cu and gestational duration and a positive association with risk for PTB. Cu concentration was strongly correlated with APRs and infection status, suggesting a potential role in inflammation, a pathway implicated in the mechanisms of PTB. Therefore, maternal Cu could serve as a potential marker of integrated inflammatory pathways during pregnancy and of risk for PTB.
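The per-cohort regression estimates above were combined by random-effects meta-analysis. A minimal DerSimonian–Laird sketch of that pooling step is below; the per-cohort log odds ratios and standard errors are invented for illustration, not the study's 18-cohort data.

```python
import numpy as np

def dersimonian_laird(estimates, ses):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimator."""
    y = np.asarray(estimates, dtype=float)   # e.g. per-cohort log odds ratios
    v = np.asarray(ses, dtype=float) ** 2    # within-cohort variances
    w = 1.0 / v                              # fixed-effect (inverse-variance) weights
    y_fe = np.sum(w * y) / np.sum(w)         # fixed-effect pooled mean
    q = np.sum(w * (y - y_fe) ** 2)          # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-cohort variance, floored at 0
    w_re = 1.0 / (v + tau2)                  # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)     # pooled random-effects estimate
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2

# Hypothetical per-cohort log odds ratios of PTB per 1 ug/mL Cu increase.
mu, se_mu, tau2 = dersimonian_laird([0.30, 0.15, 0.40, 0.22],
                                    [0.10, 0.12, 0.15, 0.08])
pooled_or = np.exp(mu)
ci = (np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu))
```

The random-effects (rather than fixed-effect) model is the natural choice here because the 18 cohorts are geographically diverse, so true effect sizes are allowed to vary between cohorts via tau².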
Background: Proper evaluation of therapeutic responses in Chagas disease is hampered by the prolonged persistence of antibodies to Trypanosoma cruzi measured by conventional serological tests and by the lack of sensitivity of parasitological tests. Previous studies indicated that tGPI-mucins, an α-Gal (α-d-Galp(1→3)-β-d-Galp(1→4)-d-GlcNAc)-rich fraction obtained from the T. cruzi trypomastigote surface coat, elicit a strong and protective antibody response in infected individuals, which disappears soon after successful treatment. The cost and technical difficulties associated with tGPI-mucins preparation, however, preclude its routine implementation in clinical settings. Methods/principal findings: We developed a neoglycoprotein consisting of a BSA scaffold decorated with several units of a synthetic α-Gal antigenic surrogate (α-d-Galp(1→3)-β-d-Galp(1→4)-β-d-Glcp). Serological responses to this reagent, termed NGP-Tri, were monitored by means of an in-house enzyme-linked immunosorbent assay (α-Gal-ELISA) in a cohort of 82 T. cruzi-infected and benznidazole- or nifurtimox-treated children (3 days to 16 years old). This cohort was split into three groups based on the age of patients at the time of treatment initiation: Group 1 comprised 24 babies (3 days to 5 months old; median = 26 days), Group 2 comprised 31 children (7 months to 3 years old; median = 1.0 year), and Group 3 comprised 26 patients (3 to 16 years old; median = 8.4 years). A second, control cohort (Group 4) included 39 non-infected infants (3 days to 5 months old; median = 31 days) born to T. cruzi-infected mothers. Despite its suboptimal seroprevalence (58.4%), the α-Gal-ELISA yielded a shorter median time to negativization (23 months [95% CI 7 to 36 months] vs 60 months [95% CI 15 to 83 months]; p = 0.0016) and a higher rate of negative seroconversion (89.2% vs 43.2%; p < 0.005) compared with conventional serological methods. The same effect was verified for every group when analyzed separately. Most remarkably, 14 out of 24 (58.3%) patients from Group 3 achieved negative seroconversion by α-Gal-ELISA, while none of them negativized by conventional serology. Detailed analysis of patients showing unconventional serological responses suggested that, in addition to providing a novel tool to shorten follow-up periods after chemotherapy, the α-Gal-ELISA may assist in other diagnostic needs in pediatric Chagas disease. Conclusions/significance: The tools evaluated here provide the cornerstone for the development of an efficacious, reliable, and straightforward post-therapeutic marker for pediatric Chagas disease.
INTRODUCTION: Community engagement and participatory research are widely used and considered important for ethical health research and interventions. Based on calls to unpack their complexity and observed biases in their favour, we conducted a realist review with a focus on non-communicable disease prevention. The aim was to generate an understanding of how and why engagement or participatory practices enhance or hinder the benefits of non-communicable disease research and interventions in low- and middle-income countries. METHODS: We retroductively formulated theories based on existing literature and realist interviews. After initial searches, preliminary theories and a search strategy were developed. We searched three databases and screened records with a focus on theoretical and empirical relevance. Insights about contexts, strategies, mechanisms and outcomes were extracted and synthesised into six theories. Five realist interviews were conducted to complement literature-based theorising. The final synthesis included 17 quality-appraised articles describing 15 studies. RESULTS: We developed six theories explaining how community engagement or participatory research practices either enhance or hinder the benefits of non-communicable disease research or interventions. Benefit-enhancing mechanisms include community members' agency being realised, a shared understanding of the benefits of health promotion, communities feeling empowered, and community members feeling solidarity and unity. Benefit-hindering mechanisms include community members' agency remaining unrealised and participation being driven by financial motives or reputational expectations. CONCLUSION: Our review challenges assumptions about community engagement and participatory research being solely beneficial in the context of non-communicable disease prevention in low- and middle-income countries.
We present both helpful and harmful pathways through which health and research outcomes are affected. Our practical recommendations relate to maximising benefits and minimising harm by addressing institutional inflexibility and researcher capabilities, managing expectations of research, promoting solidarity in solving public health challenges, and sharing decision-making power.
Background: Malaria is a parasitic disease that affects many of the poorest economies, resulting in approximately 241 million clinical episodes and 627,000 deaths annually. Piperaquine, when administered with dihydroartemisinin, is an effective drug against the disease. Drug concentration measurements taken on day 7 after treatment initiation have been shown to be a good predictor of therapeutic success with piperaquine. A simple capillary blood collection technique, where blood is dried onto filter paper, is especially suitable for drug studies in remote areas or resource-limited settings, or when taking samples from children, toddlers, and infants. Methods: Three 3.2 mm discs were punched out from a dried blood spot (DBS) and then extracted in a 96-well plate using solid-phase extraction on a fully automated liquid handling system. The analysis was performed using LC-MS/MS with a calibration range of 3–1000 ng/mL. Results: The recovery rate was approximately 54–72%, and the relative standard deviation was below 9% for low, middle, and high quality control levels. The LC-MS/MS quantification limit of 3 ng/mL is sensitive enough to detect piperaquine for up to 4–8 weeks after drug administration, which is crucial when evaluating recrudescence and drug resistance development. While different hematocrit levels can affect DBS drug measurements, the effect was minimal for piperaquine. Conclusion: A sensitive LC-MS/MS method, in combination with fully automated extraction in a 96-well plate format, was developed and validated for the quantification of piperaquine in DBS. The assay was implemented in a bioanalytical laboratory for processing large-scale clinical trial samples.
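Quantification over a calibration range such as 3–1000 ng/mL rests on a fitted calibration curve. The sketch below shows a generic weighted linear fit and back-calculation step; the standard concentrations and instrument responses are invented, and the 1/x² weighting is a common bioanalytical convention for wide ranges, not necessarily the validated assay's scheme.

```python
import numpy as np

# Hypothetical calibration standards (ng/mL) spanning a 3-1000 ng/mL
# range, with invented instrument responses (analyte/IS peak-area ratios).
conc = np.array([3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])
resp = np.array([0.012, 0.040, 0.119, 0.401, 1.20, 4.01])

# 1/x^2 weighting of the squared residuals keeps the low standards from
# being swamped by the high ones; np.polyfit squares its weights, so
# pass w = 1/x to get 1/x^2 on the squared residuals.
slope, intercept = np.polyfit(conc, resp, 1, w=1.0 / conc)

def back_calculate(area_ratio):
    """Back-calculate an unknown's concentration from its response."""
    return (area_ratio - intercept) / slope
```

In validation, each standard would also be back-calculated against the curve and required to fall within set accuracy limits (commonly ±15%, ±20% at the lower limit of quantification).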
Background: Plasmodium falciparum variant surface antigens (VSAs) contribute to malaria pathogenesis by mediating cytoadhesion of infected red blood cells to the microvascular endothelium. In this study, we investigated the association between anti-VSA antibodies and clinical outcome in a controlled human malaria infection (CHMI) study. Methods: We used flow cytometry and ELISA to measure levels of IgG antibodies to VSAs of five heterologous and one homologous P. falciparum parasite isolates, and to two PfEMP1 DBLβ domains, in blood samples collected a day before the challenge and 14 days after infection. We also measured the ability of an individual's plasma to inhibit the interaction between PfEMP1 and ICAM1 using competition ELISA. We then assessed the association between antibody levels, function, and CHMI-defined clinical outcome during a 21-day follow-up period post infection using Cox proportional hazards regression. Results: Antibody levels to the individual isolate VSAs, or to the two ICAM1-binding DBLβ domains of PfEMP1, were not associated with a significantly reduced risk of developing parasitemia or of meeting treatment criteria after the challenge, after adjusting for exposure. However, anti-VSA antibody breadth (i.e., the cumulative response to all the isolates) was a significant predictor of a reduced risk of requiring treatment (HR 0.23 [0.10–0.50]; p = 0.0002). Conclusion: The breadth of IgG antibodies to VSAs, but not antibodies to individual isolate VSAs, is associated with protection in CHMI.
Streptococcus pneumoniae is a leading cause of invasive disease in infants, especially in low-income settings. Asymptomatic carriage in the nasopharynx is a prerequisite for disease, and the duration of carriage is an important consideration in modelling transmission dynamics and vaccine response. Existing studies of carriage duration variability are based at the serotype level only, and do not probe variation within lineages or fully quantify interactions with other environmental factors. Here we developed a model to calculate the duration of carriage episodes from longitudinal swab data. By combining these results with whole-genome sequence data, we estimate that pneumococcal genomic variation accounted for 63% of the phenotype variation, whereas host traits accounted for less than 5%. We further partitioned this heritability into both lineage and locus effects, and quantified the amount attributable to the largest sources of variation in carriage duration: serotype (17%), drug resistance (9%), and other significant locus effects (7%). For the locus effects, a genome-wide association study identified 16 loci which may have an effect on carriage duration independent of serotype. Hits at a genome-wide level of significance were to prophage sequences, suggesting that infection by such viruses substantially affects carriage duration. These results show that both serotype and non-serotype-specific effects alter carriage duration in infants and young children and are more important than other factors such as host genetics. This has implications for models of pneumococcal competition and antibiotic resistance, and leads the way for the analysis of heritability of complex bacterial traits. Significance statement: Other than serotype, the genetic determinants of pneumococcal carriage duration are unknown.
In this study we used longitudinal sampling to measure the duration of carriage in infants, and searched for any associated variation in the pan-genome. While we found that the pathogen genome explains most of the variability in duration, serotype did not fully account for this. Recent theoretical work has proposed the existence of alleles which alter carriage duration to explain the puzzle of continued coexistence of antibiotic-resistant and sensitive strains. Here we have shown that these alleles do exist in a natural population, and also identified candidates for the loci which fulfil this role. Together these findings have implications for future modelling of pneumococcal epidemiology and resistance.
Introduction: Scrub typhus is a neglected tropical disease with an estimated 1 million cases annually. The Asia-Pacific region, including Thailand, is endemic for scrub typhus. Methods: Between June 2018 and December 2019, 31 patients with acute undifferentiated febrile illness (AUFI) were recruited for clinical trials and tested positive by a scrub typhus IgM RDT. Results: Of the 17 buffy coat patient samples tested by 47-kDa real-time PCR and 56-kDa type-specific antigen (TSA) nested PCR, 94% (16/17) were positive, and of the 11 patients who presented with eschar lesions, 100% (11/11) of the eschar samples were confirmed positive. Genetic analysis of the 560 bp partial 56-kDa TSA gene demonstrated that most Orientia tsutsugamushi (Ot) infections were with Karp, Gilliam, Taiwan, P23, and CM606-like strains. Discussion: This is the second time that CM606-like and P23-like strains have been reported in northern Thailand (first reported in 2011 and 2013, respectively). This study demonstrates that (1) the eschar remains the most reliable biological sample for PCR diagnosis of scrub typhus, and (2) northwestern Thailand has significant diversity of Ot strains, which underlines the requirement for ongoing surveillance to increase our understanding of Ot diversity and to ensure accurate diagnostics and treatment.
Background: Dengue is a mosquito-borne disease that causes over 300 million infections worldwide each year, with no specific treatment available. Effective surveillance systems are needed for outbreak detection and resource allocation. Spatial cluster detection methods are commonly used, but no general guidance exists on the most appropriate method for dengue surveillance. A comprehensive study is therefore needed to assess different methods and provide guidance for dengue surveillance programs. Methods: To evaluate the effectiveness of different cluster detection methods for dengue surveillance, we selected and assessed commonly used methods: Getis-Ord Gi*, Local Moran, SaTScan, and Bayesian modeling. We conducted a simulation study to compare their performance in detecting clusters, and applied all methods to a case study of dengue surveillance in Thailand in 2019 to further evaluate their practical utility. Results: In the simulation study, Getis-Ord Gi* and Local Moran had similar performance, with most misdetections occurring at cluster boundaries and isolated hotspots. SaTScan showed better precision but was less effective at detecting inner outliers, although it performed well on large outbreaks. Bayesian convolution modeling had the highest overall precision in the simulation study. In the dengue case study in Thailand, Getis-Ord Gi* and Local Moran missed most disease clusters, while SaTScan was mostly able to detect a large cluster. Bayesian disease mapping seemed to be the most effective, with adaptive detection of irregularly shaped disease anomalies. Conclusions: Bayesian modeling proved the most effective method, demonstrating the best accuracy in adaptively identifying irregularly shaped disease anomalies. In contrast, SaTScan excelled in detecting large outbreaks and regular forms.
This study provides empirical evidence for the selection of appropriate tools for dengue surveillance in Thailand, with potential applicability to other disease control programs in similar settings.
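Of the methods compared, the Getis-Ord Gi* statistic is simple enough to sketch directly. Below is a minimal NumPy version on a toy one-dimensional chain of districts; the case counts and binary adjacency weights are invented for illustration, and a real surveillance application would use map-based district adjacency.

```python
import numpy as np

def getis_ord_gi_star(x, w):
    """Getis-Ord Gi* z-scores for values x and spatial weights w.

    w is an (n, n) binary contiguity matrix that includes the location
    itself (w[i, i] = 1), as required for the 'star' variant.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    s = np.sqrt((x ** 2).mean() - xbar ** 2)   # global standard deviation
    wx = w @ x                                  # local weighted sums
    wsum = w.sum(axis=1)                        # sum of weights per location
    w2sum = (w ** 2).sum(axis=1)
    num = wx - xbar * wsum
    den = s * np.sqrt((n * w2sum - wsum ** 2) / (n - 1))
    return num / den

# Toy chain of 9 districts: neighbours are adjacent cells plus self.
cases = np.array([2.0, 3.0, 2.0, 20.0, 22.0, 21.0, 2.0, 3.0, 2.0])
n = len(cases)
w = np.array([[1.0 if abs(i - j) <= 1 else 0.0 for j in range(n)]
              for i in range(n)])
z = getis_ord_gi_star(cases, w)
# The middle cells (the simulated outbreak) receive the largest z-scores.
```

High positive z-scores flag hotspots and strongly negative ones flag cold spots, which is why, as noted in the Results, Gi* misdetections tend to concentrate at cluster boundaries where a neighbourhood mixes outbreak and background counts.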