2019 Great Minds student internships

Pitch your vision of the most exciting IT and social challenges to win an internship at IBM Research

Topics at the Africa Lab in Nairobi

A-2019-01

Learning from Simulation at Scale for the Malaria Policy-Making Process

When deciding how to allocate resources for the control of infectious diseases such as malaria, policy-makers often struggle to leverage the data and models at their disposal effectively. Even when such data exist, models and data are often hard to find, challenging to use, and difficult to contextualize, so the investments made to develop these assets frequently go to waste. A related challenge is that, while modern modeling assets are robust and high-fidelity, it is still not realistic for human decision-makers to consider policies with more than two or three types of interventions at a time, or more than a dozen evaluations. Moreover, these policies are often evaluated against a single type of “reward function”, although nuanced combinations of factors can affect the quality of a considered policy. We are working towards an at-scale platform for malaria policy-making that learns from simulation and blends in additional data sources. The platform will enable distributed practitioners representing multiple facets of the malaria ecosystem to contribute their resources (data, models, and insights) to assist other users in the decision-making pipeline. These resources must harness developments in machine learning so that optimal policies of high dimension (i.e., more than two interventions) can be identified.
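
As a rough illustration of the learning-from-simulation loop envisioned here, the Python sketch below searches over two hypothetical intervention levers (bed-net and indoor-spraying coverage) against a stand-in stochastic simulator. The simulator, reward weights and search strategy are invented placeholders, not the platform's actual models.

    import random

    def simulate_prevalence(itn, irs, seed):
        # Stand-in stochastic malaria simulator (hypothetical): returns a
        # prevalence estimate for given bed-net (itn) and spraying (irs) coverage.
        rng = random.Random(seed)
        effect = 0.30 * itn + 0.20 * irs - 0.10 * itn * irs  # diminishing returns
        return max(0.0, 0.40 - effect + rng.gauss(0.0, 0.02))

    def reward(itn, irs, n_runs=20):
        # Blend health impact and intervention cost into one reward function;
        # the weights are illustrative assumptions only.
        prevalence = sum(simulate_prevalence(itn, irs, s) for s in range(n_runs)) / n_runs
        return -(prevalence + 0.3 * (0.5 * itn + 0.4 * irs))

    # Random search over the 2-D policy space; a learned surrogate or an RL
    # agent would replace this loop in the envisioned platform.
    rng = random.Random(0)
    best = max(((rng.random(), rng.random()) for _ in range(200)),
               key=lambda p: reward(*p))
    print("best (ITN, IRS) coverage:", best)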

The intern will develop new AI/ML algorithms/models, run experiments (possibly in real-world settings) to test and evaluate the developed algorithms/models, and report the experimental findings in scientific publication(s).

Requirements

  • AI/machine learning techniques (e.g., reinforcement learning, deep learning, etc.)
  • Strong coding skills with essential Python libraries (e.g., Scikit-learn, Theano, NumPy, Matplotlib)
  • Experience with frameworks such as TensorFlow, PyTorch, and Keras.
A-2019-02

Digital Twin: Decision Support System for Small-Scale Farmland

In the coming years, the world will need ever more food to feed its growing population under the stark shadow of climate change and rising economic inequality. The available farmable land may not be used efficiently: half of farmers worldwide suffer post-harvest losses each year due to poor planting practices. Isolated efforts involving AI, blockchain, and IoT technologies have already started to make headway. However, as food demand increases, the technologies supporting farming will have to improve and converge to keep pace.

This project will be part of a broad multilateral self-correcting food supply platform, the ultimate goal of which is to create a digital twin or a “virtual model” of the world’s farms. This digital twin could help prepare agriculture for the above challenge by democratizing farm data, allowing those in agriculture to share insights, research, and materials, and communicate data on farmland and crop growth across the planet while connecting and cross-referencing with the supply chain. The platform will potentially allow interested entities to research and monitor various factors that influence farms. Pulling from data that has been mined through existing systems in agriculture and other platforms, the analytics system can be used to provide critical data not only to growers, but to sellers, consumers, governments, and those looking to combat world hunger.

This internship project will focus on a very large-scale data analytics system specifically designed for massive geospatial-temporal data from maps, satellites, weather, drones, IoT, and other devices.
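
A minimal sketch of the kind of geospatial-temporal operation such a system performs at scale, shown here on synthetic data: a vectorized per-pixel least-squares trend over a time series of raster grids. The array shapes and the vegetation-index framing are illustrative assumptions.

    import numpy as np

    # Synthetic stack of monthly vegetation-index rasters: (time, rows, cols).
    t, rows, cols = 24, 100, 100
    rng = np.random.default_rng(0)
    stack = rng.random((t, rows, cols)) + np.linspace(0, 0.5, t)[:, None, None]

    # Per-pixel least-squares slope over time, vectorized across the grid.
    x = np.arange(t, dtype=float)
    xc = x - x.mean()
    slope = (xc[:, None, None] * (stack - stack.mean(axis=0))).sum(axis=0) \
            / (xc ** 2).sum()

    print("mean trend per time step:", slope.mean())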

Requirements

  • AI/machine learning techniques (e.g., reinforcement learning, deep learning, etc.)
  • Strong coding skills on essential Python libraries (e.g., Scikit-learn, Theano, NumPy, Matplotlib) to streamline and process complex data (e.g., remote sensing data from satellite imagery)
  • Experience with frameworks such as TensorFlow, PyTorch, and Keras.

Topics at the Zurich Lab

Z-2019-1

Deep networks incorporating biologically realistic spiking neural dynamics

Neural networks are the key artificial-intelligence technology behind breakthroughs in many important applications. These breakthroughs were achieved primarily with artificial neural networks (ANNs) that are loosely inspired by the structure of the brain, comprising neurons interconnected by synapses. Meanwhile, the neuroscientific community has developed the Spiking Neural Network (SNN) model, which additionally incorporates biologically realistic temporal dynamics in the neuron structure. Although ANNs achieve impressive results, there is a significant gap in power efficiency and learning capabilities between deep ANNs and biological brains. A promising avenue to reduce this gap is therefore to incorporate biologically realistic dynamics into common deep-learning architectures. Recently, the IBM team demonstrated a new type of ANN unit, called a Spiking Neural Unit (SNU), that enables us to incorporate SNN dynamics directly into deep ANNs. Our initial results show promising performance, surpassing state-of-the-art recurrent networks based on LSTM and GRU units.

In this project, we aim to investigate further the advantages of biologically realistic dynamics in deep networks. Specifically, the focus will be on incorporating SNUs into large-scale deep ANNs for applications such as speech recognition, image understanding or text processing. The main task will be to extend our current TensorFlow-based framework and to explore biologically realistic dynamics in state-of-the-art architectures such as ResNet, Transformer or other attention-based networks. These developments will allow us to assess the impact of biologically realistic dynamics on important AI tasks, and indicate how to close the gap between deep learning and biological brains. The IBM team will provide extensive scientific guidance and access to a powerful GPU cluster.
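
As a rough, non-authoritative sketch of the idea, the step function below implements a leaky integrate-and-fire-style layer in the spirit of an SNU: a membrane state integrates weighted input, leaks over time, and is reset for units that just emitted a spike. The exact equations and parameters here are simplified assumptions, not the published SNU formulation.

    import numpy as np

    def snu_step(x, s_prev, y_prev, W, b, leak=0.8):
        # One time step of a simplified spiking-neural-unit layer.
        # The (1 - y_prev) factor resets the state of units that just fired.
        s = np.maximum(0.0, W @ x + leak * s_prev * (1.0 - y_prev))  # integrate + leak
        y = (s + b > 0.0).astype(float)                              # threshold / spike
        return s, y

    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.5, (4, 3))
    b = -np.ones(4)                    # acts as a firing threshold
    s, y = np.zeros(4), np.zeros(4)
    for t in range(5):
        s, y = snu_step(rng.random(3), s, y, W, b)
        print(t, y)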

Requirements

  • Experience with the TensorFlow or PyTorch machine-learning frameworks
  • Strong programming skills in Python
  • Strong analytical and problem-solving skills
  • Excellent communication and team skills.
Z-2019-2

Simulation of a nanofluidic neuromorphic device

Analog memory hardware devices for brain-like computing promise huge improvements in power efficiency compared with today’s GPUs. However, current implementations do not respond to training inputs symmetrically or with sufficient resolution. We want to explore nanofluidic devices based on gold nanoparticles in water because they can be made symmetric by design. Existing simulations of Brownian motion in static 2D energy landscapes will be complemented with simulations of electro-osmotic flows in these devices.
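
For a flavor of the starting point, here is a minimal sketch of overdamped Brownian dynamics in a static 2D energy landscape using the Euler–Maruyama scheme, with a constant drift term standing in for the electro-osmotic flow to be added. All units and parameters are illustrative.

    import numpy as np

    kT, gamma, dt, steps = 1.0, 1.0, 1e-3, 20000
    A = 2.0                            # landscape amplitude (illustrative)
    v_flow = np.array([0.5, 0.0])      # stand-in for an electro-osmotic drift

    def grad_U(p):
        # U(x, y) = A * (cos x + cos y): a static, periodic 2D energy landscape.
        return np.array([-A * np.sin(p[0]), -A * np.sin(p[1])])

    rng = np.random.default_rng(1)
    p = np.zeros(2)
    noise = np.sqrt(2.0 * kT * dt / gamma)
    for _ in range(steps):
        # Euler-Maruyama update of the overdamped Langevin equation.
        p += (-grad_U(p) / gamma + v_flow) * dt + noise * rng.normal(size=2)
    print("final position:", p)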

Requirements

  • Knowledge of Physics
  • Strong programming skills in C++/Python
  • Strong analytical and problem-solving skills
  • Excellent communication and team skills.
Z-2019-3

VO2 switches for neuromorphic devices

We are looking for an outstanding summer student for our activities based on VO2 insulator–metal transition switches. The electrical resistance of VO2 changes by several orders of magnitude at the phase-transition temperature. We exploit this phenomenon to build electronic devices for neuromorphic computing applications.

We offer the opportunity to work in a state-of-the-art exploratory research facility with close interaction with leading experts in the fields of nanofabrication and nanoscale device measurements. You will join our nanoelectronics team and contribute to the design of nanostructures and their fabrication, as well as performing material and device characterization measurements. You will have the opportunity to work in a collaborative and creative group in a lively research environment.

Requirements

Applicants are expected to have a physics or engineering background in any of these topics: nanometer-scale science, nanofabrication, electrical device characterization or circuit design. The ideal candidate is very talented, creative, communicative and highly motivated. This position is available this summer for a duration of 2–3 months.

Z-2019-4

Electron devices using Weyl semi-metals

We are looking for a summer student for our activities in electron devices using Weyl semi-metals, which were demonstrated only recently. They form a novel material class and exhibit extreme properties such as record-high magnetoresistance and macroscopic scattering lengths. We have found that some Weyl semi-metals also exhibit hydrodynamic electron flow, i.e. charge no longer flows diffusively as in ordinary metals or semiconductors but behaves like a viscous liquid. The technological prospects of these characteristics are yet to be explored, and this project is part of a pioneering effort to do so.

We offer the opportunity to work in a state-of-the-art exploratory research facility with close interaction with leading experts in the fields of nanofabrication, nanoscale devices and low-level transport measurements. You will join our nanoelectronics team and perform experiments and electrical measurements to develop our Weyl semi-metal material platform. You will have the opportunity to work in a collaborative and creative group in a lively research environment.

Requirements

Applicants are expected to have a physics or engineering background in any of these topics: nanometer-scale science, nanofabrication, material or electrical characterization. The ideal candidate is adventurous, communicative and highly motivated. This position is available for a duration of 2–4 months.

Z-2019-5

Demonstration of interplay between neural network software algorithms and novel hardware accelerators

To overcome compute bottlenecks in AI and machine-learning applications, we have developed several non-volatile, in-memory and optical technology candidates for building highly power-efficient analog compute engines. These engines accelerate algorithmic core operations such as matrix-vector multiplications, matrix transposes and parameter/weight updates. To assess performance at an early stage, we have also built a Python/TensorFlow simulation framework that interacts directly with the physical hardware on the test bench (measurement setup). In this summer internship, the student will run and optimize MNIST training concurrently on a host computer and on different analog hardware engines on the test bench.
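
Purely to illustrate the flavor of hardware-in-the-loop training, the sketch below trains a softmax classifier while routing every matrix-vector product through a stand-in "analog engine" that quantizes weights and adds read noise; on the real test bench this function would instead call into the physical hardware. The data, noise model and sizes are invented.

    import numpy as np

    rng = np.random.default_rng(0)

    def analog_matvec(W, x):
        # Stand-in for the analog compute engine: coarse weight quantization
        # plus additive read noise, mimicking device non-idealities.
        return (np.round(W * 8) / 8) @ x + rng.normal(0, 0.01, W.shape[0])

    # Tiny synthetic stand-in for MNIST: 10 classes, 64-dimensional inputs.
    means = rng.normal(size=(10, 64))
    labels = rng.integers(0, 10, 1000)
    X = means[labels] + 0.5 * rng.normal(size=(1000, 64))

    W = np.zeros((10, 64))
    for epoch in range(5):
        for x, y in zip(X, labels):
            z = analog_matvec(W, x)                     # forward pass on "hardware"
            p = np.exp(z - z.max()); p /= p.sum()       # softmax
            W -= 0.01 * np.outer(p - np.eye(10)[y], x)  # digital weight update

    pred = np.argmax([analog_matvec(W, x) for x in X], axis=1)
    print("train accuracy:", (pred == labels).mean())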

Z-2019-6

Characterization and control of non-volatile analog memory elements

At IBM Research – Zurich, we have established several technologies and materials for the realization of non-volatile resistive memory elements. In this internship, measurements will be performed on these materials and devices to characterize, interpret and understand their suitability for applications in the training of deep neural networks.

Z-2019-7

Machine learning for electronic tongues

Cross-sensitive sensor arrays, also called “electronic tongues”, are a promising technology to generate unique chemical fingerprints of liquids and can be applied to various domains such as food safety, quality control or healthcare. Sensor arrays based on potentiometric measurements feature very low power consumption and are therefore suitable for portable or remote applications. Machine learning is an essential component of electronic tongues and involves both supervised and unsupervised learning and classification methods, depending on the context.

In this internship project, data from exploratory electronic tongue devices will be processed by machine-learning algorithms in order to classify different types of liquids, and to perform multivariate calibration in order to correlate electronic tongue data quantitatively with concentrations of dissolved compounds.
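
To hint at the workflow, here is a minimal scikit-learn sketch on synthetic sensor-array data: PLS regression for multivariate calibration against analyte concentrations, plus a simple classifier for liquid types. The sensitivity matrix, noise level and class labels are invented.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Synthetic potentiometric array: 8 cross-sensitive electrodes, 3 analytes.
    S = rng.normal(size=(8, 3))                     # sensitivity matrix (invented)
    C = rng.uniform(0, 1, size=(200, 3))            # analyte concentrations
    V = C @ S.T + 0.05 * rng.normal(size=(200, 8))  # measured electrode potentials

    # Multivariate calibration: map potentials back to concentrations.
    pls = PLSRegression(n_components=3).fit(V, C)
    print("calibration R^2:", pls.score(V, C))

    # Classification of liquid types from the same chemical fingerprints.
    liquid = (C[:, 0] > 0.5).astype(int)            # toy class label
    print("classification accuracy:", LogisticRegression().fit(V, liquid).score(V, liquid))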

Recommended background

Basic knowledge of machine learning and tools for implementation, e.g. Python.

Z-2019-8

Privacy- and ethics-compliant classifier for AI in healthcare

The potential of AI in healthcare is tremendous, particularly in combination with data acquired from smart sensors. By tracking patients’ symptoms continuously and objectively, we can model the disease progression of individuals even outside the hospital setting. As a result, a patient’s quality of life can be improved through preventive care and disease management.

However, data privacy and ethical best practices need to be respected during exploration, training and maintenance of such systems. In clinical trials, it is best practice to delete raw data at the end of a study. Thus, a follow-up project cannot use data harvested from the previous project. In other cases, patients might agree to the recording of sensitive personal data only as long as it is not shared, or even transferred to the cloud. Thus, data remains distributed on patients’ personal devices.

Considering these constraints, we offer an internship position to explore methodologies such as online, hierarchical, adaptive, and federated learning to unlock the full potential of AI algorithms in the field of healthcare. The student will design and implement classifiers compatible with spatially or temporally partitioned training data sets. The proposed training pipeline will be demonstrated and benchmarked on an audio-analytics use case: building a cough or activity-of-daily-living classifier without the need to centralize sensitive data, and extending it with additional classes from data acquired in subsequent clinical trials.
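
A minimal sketch of one such methodology, federated averaging: each simulated patient device trains a local logistic-regression model on its own data, and only model weights, never raw data, are shared and averaged centrally. All data and dimensions here are synthetic.

    import numpy as np

    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)

    def local_data(n=50):
        # Each device holds its own sensor features; raw data never leaves it.
        X = rng.normal(size=(n, 5))
        y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)
        return X, y

    devices = [local_data() for _ in range(10)]
    w = np.zeros(5)
    for _ in range(20):                            # federated rounds
        updates = []
        for X, y in devices:
            w_local = w.copy()
            for _ in range(5):                     # local gradient steps
                p = 1 / (1 + np.exp(-(X @ w_local)))
                w_local -= 0.1 * X.T @ (p - y) / len(y)
            updates.append(w_local)
        w = np.mean(updates, axis=0)               # FedAvg: average weights only

    X_all = np.vstack([d[0] for d in devices])
    y_all = np.concatenate([d[1] for d in devices])
    print("global accuracy:", (((X_all @ w) > 0) == y_all.astype(bool)).mean())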

Z-2019-9

Enhancing a computational framework to analyze line and scatter plots

Scientific documents such as papers, reports and patents, but also other professional documents such as financial or medical reports, very often include numerous graphs. The purpose of these graphs is to illustrate, in a graphical way, data sets that explain, describe or emphasize the textual content of those documents. Such data sets can be generated through experiments, measurements, observations or other means, and are depicted in graphs so that the reader can extract the message quickly and efficiently. Given the emergence of internet searches, archival storage and the speed at which new scientific documents are created, it would be of great value to have a tool that can automatically scan numerous documents, extract the main scientific knowledge and present it in a concise and meaningful way. For a document to be analyzed completely and thoroughly, however, its graphs also need to be processed and the main knowledge, as presented by the depicted data sets, extracted. Because such graphs are stored primarily as bitmap images, the data sets are frequently noisy, with the graphical symbols used to depict them, such as lines, markers and text, overlapping, overriding or intersecting one another. At IBM Research – Zurich, we are developing computational techniques based on image processing and machine learning to identify graphical symbols automatically, extract their semantics and ultimately capture the data (knowledge) they represent. From the taxonomy of various graphs, we are currently focusing on line and scatter plots, phase diagrams and forms.

For our growth in the area of extracting knowledge from scientific graphs, we are looking for motivated candidates to enhance our computational framework in the analysis of line and scatter plots. Candidates should be studying Computer Science, Electrical Engineering or related fields, with experience and interest in deep learning, image processing and — ideally — pattern recognition.

Z-2019-10

Histopathology image analysis

In digital pathology, we focus on the analysis of digitized histopathology and molecular expression images, as well as cytology images. Imaging of tissue specimens is a powerful tool to extract quantitative metrics of phenotypic properties while preserving the morphology and spatial relationships of the tissue microenvironment. Novel staining technologies such as immunohistochemistry (IHC) and in situ hybridization (ISH) further empower the evidencing of molecular expression patterns by multicolor visualization. Such techniques are thus commonly used for predicting disease susceptibility as well as for stratification, treatment selection and monitoring. However, translating molecular expression imaging into direct health benefits has been slow, which can be attributed to two major factors. On the one hand, disease susceptibility and progression is a complex, multifactorial molecular process. Diseases such as cancer exhibit tissue and cell heterogeneity, impeding our ability to differentiate between various stages or types of cell formations, most prominently between inflammatory response and malignant cell transition. On the other hand, the relative quantification of selected features in stained tissue is ambiguous and tedious, and thus time-consuming and prone to clerical error, leading to intra- and interobserver variability and low throughput. At IBM Research – Zurich, we are developing advanced image analytics to address both of the above limitations, aiming to transform the analysis of stained tissue images into a high-throughput, robust, quantitative and data-driven yet explainable science.

For our growth area in digital pathology, we are looking for motivated candidates to enhance and advance our computational framework. Candidates should be studying Computer Science, Electrical Engineering or related fields, with experience and interest in deep learning, image processing and pattern recognition.

Z-2019-11

Analysis of molecular and clinical data to integrate disparate types of data into models that can help risk-stratify patients

Despite their great promise, high-throughput technologies in cancer research have often failed to translate into major therapeutic advances in the clinical environment. One challenge lies in the high level of tumour heterogeneity displayed by human cancers, which renders the identification of driving molecular alterations difficult, and thus often results in therapies that only target subsets of aggressive tumour cells. Another challenge lies in the difficulty of integrating disparate types of molecular data into mathematical disease models that can yield actionable clinical statements.

The Computational Systems Biology group at IBM Research – Zurich aims to develop new mathematical and computational approaches to analyze and exploit the latest generation of biomedical data. In the context of cancer, our group focuses on integrating high-throughput molecular datasets to build comprehensive molecular disease models, developing new approaches to reconstruct signaling protein networks from single-cell time-series proteomic data, and applying Bayesian approaches and high-performance computing to the problem of network reconstruction. An active line of research focuses on prostate cancer, a leading cause of cancer death amongst men in Europe, but also prone to over-treatment.

This internship will focus on the analysis of molecular (genomic, transcriptomic, and proteomic) and clinical data, and the use of the latest-generation cognitive technologies developed at IBM with the goal of integrating disparate types of data into models that can help risk-stratify patients. Candidates should have a strong background in computer science, machine learning, mathematics or physics and be interested in cancer-related research.

Requirements

  • Working knowledge of C or C++.
  • Working knowledge of Matlab, R or equivalent.
  • Solid knowledge of statistics and mathematical modeling.
  • Some knowledge of molecular biology, genetics and systems biology, as well as high-throughput technologies for the molecular characterization of cancer samples, would be beneficial but is not essential.
Z-2019-12

Big Data time series analysis using deep learning

Analysis of continuous and discrete-valued time series is essential for the intelligent management of complex systems in a range of industries. Predictive maintenance aims to predict system failures before they occur, preventing the consequences of outages and costly repairs.

Machine learning is widely applied for understanding, forecasting and predicting based on time-series data. Deep-learning techniques show excellent performance in discovering hidden patterns when large amounts of data are available. However, the applicability and business value of such techniques is largely impacted by the subtleties of modelling and the quantity, quality and freshness of data used for training.

The successful candidate will have the opportunity to apply and perfect state-of-the-art machine-learning methods for time-series analyses to predict failures of real-world industrial systems, and/or work on a highly scalable Big Data infrastructure that enables training and deployment of machine-learning models in a reliable manner.
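
To make the setup concrete, here is a small, hedged sketch of a typical pipeline: sliding windows over a single sensor signal feeding a compact LSTM failure classifier in tf.keras. The signal, labels and architecture are synthetic and purely illustrative.

    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(0)
    # Synthetic sensor signal whose variance rises as the system degrades.
    n, win = 2000, 50
    signal = rng.normal(size=n)
    signal[1500:] *= np.linspace(1, 3, 500)        # degradation before failure
    fail = np.zeros(n); fail[1500:] = 1

    # Sliding windows -> (samples, timesteps, features) with per-window labels.
    X = np.stack([signal[i:i + win] for i in range(n - win)])[..., None]
    y = fail[win:]

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(16, input_shape=(win, 1)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=3, batch_size=64, verbose=0)
    print("training accuracy:", model.evaluate(X, y, verbose=0)[1])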

Requirements

  • Solid background in statistics, probability theory, and machine learning
  • Hands-on experience using machine-learning algorithms, specifically deep learning
  • Experience with GPU-accelerated scientific libraries for machine learning
  • Strong programming skills in Python.

Desired expertise

  • Familiarity with time-series analysis
  • Hands-on experience with large-scale data processing techniques
  • Familiarity with Big Data technologies such as Spark and Kafka
  • Good programming skills in Scala.
Z-2019-13

Explainable deep neural networks

This project involves explainable deep learning in the healthcare domain. Our team has been developing medical decision support systems that can improve patient care by assisting medical professionals. These systems must be robust, trustworthy and explainable; especially in healthcare, it is very important that the reasoning behind every decision is consistent. Neural network models show relatively high performance on several tasks needed to build such systems, but applying them in real life requires the models to be stable in both the predictions and the explanations they produce. We are currently investigating potential evaluation metrics for the fragility of these explanations, as well as the connections between prediction stability and explanations. Reaching a sufficient level of trust in automated approaches in healthcare yields cheaper, more widely accessible medical assessments as well as more accurate and more personalized precision medicine. The focus of the internship will be:

  • Research on robustness and explainability metrics of end-to-end neural architectures
  • Performance improvement of the implemented models used for several tasks, e.g., natural language processing, patient risk assessment and question generation
  • Help with the development and deployment of our application.
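
To make the fragility idea above concrete, here is a tiny sketch of one possible stability metric: compute gradient-based saliency for an input and for slightly perturbed copies of it, then measure how much the explanation moves. The two-layer model and the cosine-similarity metric are illustrative assumptions, not our production systems.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=16)

    def predict(x):
        return 1 / (1 + np.exp(-(W2 @ np.maximum(0, W1 @ x))))

    def saliency(x):
        # Gradient of the output w.r.t. the input, via the chain rule.
        p = predict(x)
        return p * (1 - p) * (W1.T @ (W2 * ((W1 @ x) > 0)))

    def explanation_stability(x, eps=0.01, trials=20):
        # Mean cosine similarity between the saliency of x and of noisy copies;
        # values near 1 suggest robust explanations, low values fragility.
        s0 = saliency(x)
        sims = [saliency(x + eps * rng.normal(size=x.shape)) for _ in range(trials)]
        return float(np.mean([s @ s0 / (np.linalg.norm(s) * np.linalg.norm(s0))
                              for s in sims]))

    x = rng.normal(size=8)
    print("prediction:", predict(x), "stability:", explanation_stability(x))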

Z-2019-14

Cognitive solutions for challenging NLP and text-mining problems on very large, domain-specific text documents

We are developing cognitive solutions for challenging NLP and text-mining problems on very large, domain-specific text documents. In one of our cognitive projects, we first aim to discover text excerpts that are of interest to the target domain in large text documents and then to classify them automatically. Furthermore, providing an understandable, short summary of a larger document, and locating text excerpts very similar to a given short target text, are also of great interest because they can enable a great deal of automation when analyzing and interpreting very large text documents.

To achieve these goals, we need to address challenging text classification, text summarization, text similarity and text search problems that require adaptation and application of state-of-the-art machine learning, deep learning and NLP techniques. Another challenge when dealing with domain-specific text is that, although very rich ontologies exist for the common knowledge domain to improve the quality of text search results (e.g. DBpedia, Freebase, YAGO), this is not the case for many other domains. Therefore, approaches for (semi-)automatic ontology extraction for the target domain from a large amount of relevant domain-specific text corpora are also of interest in this project.
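
As a small illustration of the excerpt-similarity piece, the sketch below ranks candidate passages against a short target text using TF-IDF and cosine similarity; in practice, domain-adapted deep models would replace this simple baseline. The example texts are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    excerpts = [
        "The turbine bearing showed elevated vibration levels during testing.",
        "Quarterly revenue grew by four percent compared to last year.",
        "Bearing temperature and vibration exceeded the specified thresholds.",
    ]
    target = "abnormal vibration of the bearing"

    vec = TfidfVectorizer().fit(excerpts + [target])
    sims = cosine_similarity(vec.transform([target]), vec.transform(excerpts))[0]
    for score, text in sorted(zip(sims, excerpts), reverse=True):
        print(f"{score:.2f}  {text}")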

Requirements

  • Computer Science background
  • Strong programming skills in Python or similar
  • Strong analytical and problem-solving skills
  • Excellent communication and team skills
  • Excellent English skills (written and spoken)
  • Experience with deep learning, machine learning, NLP, text mining, software engineering and Big Data analytics is a plus.
Z-2019-15

Implementation and testing of a novel approach for text detection

[Image: detecting text in a challenging image]

Data usage is essential for most businesses today, and this requires textual data in machine-readable, digital form. Unfortunately, essential text is often available only as images. In such cases, the first step is to detect the text, and the second to extract it. This project is about text detection in challenging images such as the one shown here, i.e. in between scene-text detection and text detection on documents.

Recent developments in the field of image processing have led to a new concept developed at IBM Research – Zurich to achieve text detection in images such as the one shown here. The concept is based on generative models, which makes the approach scalable to multiple types of pictures and backgrounds. The concept, protected by a patent application, has already been demonstrated.

The main tasks of the project are to further test and improve the existing implementation (70%) and to develop a cloud service for scalable distribution (30%). The approximate duration of the project is 5 months.

Requirements

  • Advanced knowledge of Python and machine-learning techniques, particularly GANs
  • Advanced knowledge of state-of-the-art development tools such as Docker
  • Motivation to work in this exciting field.
Z-2019-16

Generic decision support system from documents

This project touches on two strategic assets developed at IBM Research – Zurich. The first is our form understanding tool, which is able to extract structured information from complex documents such as forms. The second is our generic decision support system, which is trained on a set of samples to generate an instance that poses next-best questions, interacting with the user to arrive at a decision as quickly as possible. The natural step, now that both assets have reached maturity, is to integrate the two systems to enable novel use cases that take unprocessed documents as input and generate a decision support instance capable of interacting with users in the field of the given documents.
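
To illustrate the next-best-question idea on a toy scale: given a table of candidate outcomes and binary questions, greedily pick the question that minimizes the expected remaining uncertainty over the outcomes. The decision table below is invented and unrelated to our actual system.

    import math

    # Toy decision table (invented): outcome -> answers to binary questions.
    answers = {
        "loan_A": {"has_income": 1, "owns_home": 1, "is_student": 1},
        "loan_B": {"has_income": 1, "owns_home": 1, "is_student": 0},
        "reject": {"has_income": 0, "owns_home": 1, "is_student": 0},
        "defer":  {"has_income": 0, "owns_home": 0, "is_student": 0},
    }

    def entropy(n):
        return math.log2(n) if n else 0.0

    def next_best_question(candidates, asked):
        # Choose the question with the lowest expected remaining entropy.
        best, best_cost = None, float("inf")
        for q in next(iter(answers.values())):
            if q in asked:
                continue
            yes = [c for c in candidates if answers[c][q]]
            no = [c for c in candidates if not answers[c][q]]
            cost = (len(yes) * entropy(len(yes))
                    + len(no) * entropy(len(no))) / len(candidates)
            if cost < best_cost:
                best, best_cost = q, cost
        return best

    print(next_best_question(list(answers), asked=set()))  # -> has_income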

The goal of this project is first to integrate our in-house decision support system with our in-house form understanding tool (60%), and then to demonstrate the overall tool end-to-end on an existing dataset of digital documents (40%). The estimated duration of the project is 5 months.

Requirements

  • Advanced knowledge of Python
  • Advanced knowledge of state-of-the-art development tools such as Docker
  • Motivation to work in this exciting field.
Z-2019-17

Blockchain core and application development

We are looking for highly motivated interns to join our advanced research and development activities in the area of industry platforms and blockchain. Ideal candidates are familiar with blockchain, security and distributed systems technology.

Depending on their background, candidates may contribute to extensions of Hyperledger Fabric or work on blockchain applications and on extending trust to the physical world using the concept of crypto anchors.

Requirements

  • Experience with DevOps and standard coding practices.

AI for Social Good

Possible at Nairobi, Johannesburg or Zurich Labs

AI-2019-1

Fairness in AI-based skin cancer diagnosis

Light-skinned people have the highest risk of developing skin cancer, but the mortality rate for African-Americans in the United States is much higher, primarily due to misdiagnosis. Now that machine-learning methods are achieving superhuman performance in melanoma detection and classification, it is important that past disparities not be propagated in learned models. This project will utilize the AI Fairness 360 open-source toolkit and develop new methods for making AI-based skin cancer diagnosis models fair for all populations of the world.
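
For orientation, here is a minimal sketch of the AI Fairness 360 workflow on a toy tabular dataset: quantify disparate impact across a protected attribute, then mitigate it with the Reweighing pre-processor. The column names, groups and data are invented for illustration; the aif360 classes shown are the toolkit's standard entry points.

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Toy diagnosis outcomes (invented); skin_type 1 = privileged group.
    df = pd.DataFrame({
        "skin_type":   [1, 1, 1, 1, 0, 0, 0, 0],
        "lesion_size": [3, 5, 2, 7, 4, 6, 2, 5],
        "label":       [1, 1, 0, 1, 1, 0, 0, 0],  # 1 = favorable outcome
    })
    ds = BinaryLabelDataset(df=df, label_names=["label"],
                            protected_attribute_names=["skin_type"])
    priv, unpriv = [{"skin_type": 1}], [{"skin_type": 0}]

    metric = BinaryLabelDatasetMetric(ds, unprivileged_groups=unpriv,
                                      privileged_groups=priv)
    print("disparate impact before:", metric.disparate_impact())

    # Reweighing rebalances instance weights so that labels become
    # independent of group membership before any model is trained.
    ds_fair = Reweighing(unprivileged_groups=unpriv,
                         privileged_groups=priv).fit_transform(ds)
    metric = BinaryLabelDatasetMetric(ds_fair, unprivileged_groups=unpriv,
                                      privileged_groups=priv)
    print("disparate impact after:", metric.disparate_impact())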

Requirements

  • Hands-on experience with computer vision and machine learning
  • Coding experience in Python
  • Strong analytical and problem-solving skills
  • Excellent communication and team skills.
AI-2019-2

Trustworthy AI Pentathlon

Machine-learning models are achieving very high accuracies for various tasks, but accuracy is not a strong enough criterion to earn users’ trust, especially for high-stakes decision making. Several other criteria are also important, including explainability, fairness, robustness to dataset shift, and robustness to adversarial examples. This project will aim to develop benchmarking datasets, baseline models, and a contest for machine-learning researchers to evaluate their models on all five aforementioned criteria. The project may utilize the Python open-source Adversarial Robustness Toolbox and AI Fairness 360 toolkit.
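
As a concrete example of the adversarial-robustness criterion, here is a minimal sketch with the Adversarial Robustness Toolbox: wrap a scikit-learn classifier and measure how its accuracy drops under a Fast Gradient Method attack. The module paths shown follow recent ART releases and may differ in older versions.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from art.estimators.classification import SklearnClassifier
    from art.attacks.evasion import FastGradientMethod

    X, y = load_digits(return_X_y=True)
    X = X / 16.0                                   # scale pixel values to [0, 1]
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Wrap the model so ART can compute the loss gradients the attack needs.
    classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
    X_adv = FastGradientMethod(estimator=classifier, eps=0.1).generate(x=X)

    print("clean accuracy:      ", model.score(X, y))
    print("adversarial accuracy:", model.score(X_adv, y))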

Requirements

  • Hands-on experience working with real-world data
  • Hands-on experience with machine learning
  • Coding experience in Python.

Reference

“Building Trust in AI the IBM Way,” ZDNet Video, 2018.