Article Abstract

Breakthroughs in artificial intelligence (AI) hold enormous potential, as AI can automate complex tasks and even exceed human performance in many contexts, including medicine. Recent studies have shown that AI models can outperform clinicians at cancer screening, opening new avenues for optimizing clinical workflows toward faster, more robust, and more accurate diagnoses. However, these studies often fail to disclose enough information or materials for other researchers to reproduce the initial findings, seriously undermining their scientific value. Ensuring that these AI systems meet their potential requires that such studies be scientifically reproducible. Recent advances in computational virtualization and AI frameworks greatly facilitate the implementation of complex deep neural networks in a more structured, transparent, and reproducible way. In an international effort, we identified common obstacles hindering transparent and reproducible AI research and provide technological solutions to these obstacles, with implications for the broader field. Adoption of these technologies will increase the impact of published deep learning algorithms and accelerate the translation of these methods into clinical settings.