Great Minds student internships

2022 Projects

Topics at the Africa Lab in Johannesburg

SA-2022-01

Neuro-Symbolic AI for Natural Language Understanding

Neural (sub-symbolic) interpretations of logical inference and reasoning date back to the first descriptions of artificial neural networks and their use as threshold logic. Nevertheless, symbolic AI was the dominant paradigm for decades at the advent of AI, since it offered interpretable, general, human-like reasoning. Neuro-symbolism aims to combine the fault tolerance, parallelism, and learning of connectionism with the logical abstractions and inference of symbolism. Neuro-symbolic integration can take several forms, e.g. 1) propositionalization of raw data for symbolic interpretation; 2) predicate implementation to perform logical functions on ground propositions; 3) predicate invention for rule induction and theory learning; and 4) implementation of logical reasoning constructs such as modus ponens, inference, implication, entailment, and modal logic.
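
To make the threshold-logic connection concrete, here is a minimal illustrative sketch (not project code) of how a single McCulloch-Pitts-style threshold unit realizes the Boolean connectives AND and OR, one of the earliest neural interpretations of logical inference:

    import numpy as np

    def threshold_neuron(x, w, b):
        """McCulloch-Pitts-style unit: fires iff the weighted sum crosses the threshold."""
        return int(np.dot(w, x) + b > 0)

    # AND and OR over two Boolean inputs, expressed purely as threshold logic
    AND = lambda x: threshold_neuron(x, w=np.array([1.0, 1.0]), b=-1.5)
    OR = lambda x: threshold_neuron(x, w=np.array([1.0, 1.0]), b=-0.5)

    for a in (0, 1):
        for b in (0, 1):
            x = np.array([a, b])
            print(f"{a} AND {b} = {AND(x)}   {a} OR {b} = {OR(x)}")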

In this project, new neuro-symbolic architectures and models will be developed to demonstrate the advantages of neuro-symbolic learning and reasoning in natural language understanding. This will spur the development of complementary approaches that combine deep-learning advances with symbolic AI, exploiting the strengths of each paradigm and compensating for the weaknesses of the other.

The intern will run experiments on real world data, develop new models, and report the findings in scientific publication(s).

Requirements

  • Strong programming skills in Python
  • Strong analytical and problem-solving skills
  • Excellent communication and team skills
  • Experience with AI / machine learning techniques
  • Experience using essential Python libraries such as Scikit-learn, Theano, NumPy, Matplotlib
  • Experience with TensorFlow or PyTorch machine-learning frameworks
SA-2022-02

Improving Sub-Seasonal to Seasonal Climate Predictions

Sub-Seasonal to Seasonal (S2S) climate prediction has long been a gap in operational weather forecasting. Its timescale ranges from two weeks to an entire season, although some authors have recently used the term S2S more broadly to include seasonal forecasts up to 12 months ahead. S2S is considered more challenging than both numerical weather prediction (NWP, 1-15 days) and seasonal forecasting (2-6 months) because of the limited predictive information from land and ocean and the weak predictive signal from the atmosphere. Improving S2S forecasts would significantly impact downstream applications such as streamflow forecasting, heatwave prediction, water resource management, and in-season climate-aware crop modeling on the sub-seasonal time scale.

In this project, a set of machine learning methods will be used to improve the skill and usability of S2S data products. Ensemble physics-based S2S forecasts will be combined with historical land and ocean variables, with the aim of forecasting temperature and total precipitation more skilfully than current physics-based (computational fluid dynamics) forecasting models.
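
A common family of approaches here is statistical post-processing: learning a correction on top of the physics-based ensemble using land and ocean predictors. The sketch below illustrates the idea on synthetic stand-in arrays (names such as soil_moisture and sst_index are assumptions for illustration, not the project's actual data):

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n_samples, n_members = 500, 11

    # Synthetic stand-ins for a physics-based S2S ensemble and land/ocean covariates
    ensemble = rng.normal(size=(n_samples, n_members))    # e.g. 2m-temperature anomaly members
    soil_moisture = rng.normal(size=(n_samples, 1))       # historical land variable
    sst_index = rng.normal(size=(n_samples, 1))           # historical ocean variable (e.g. an ENSO index)
    observed = ensemble.mean(axis=1) + 0.5 * sst_index[:, 0] + 0.1 * rng.normal(size=n_samples)

    # Features: ensemble statistics plus the land/ocean predictors
    X = np.hstack([ensemble.mean(axis=1, keepdims=True),
                   ensemble.std(axis=1, keepdims=True),
                   soil_moisture, sst_index])

    model = GradientBoostingRegressor().fit(X[:400], observed[:400])
    raw_skill = np.corrcoef(ensemble[400:].mean(axis=1), observed[400:])[0, 1]
    ml_skill = np.corrcoef(model.predict(X[400:]), observed[400:])[0, 1]
    print(f"ensemble-mean correlation: {raw_skill:.2f}, ML-corrected: {ml_skill:.2f}")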

The intern will run experiments on real world data, develop new models, and report the findings in scientific publication(s).

Requirements

  • Strong programming skills in Python
  • Strong analytical and problem-solving skills
  • Excellent communication and team skills
  • Experience with AI / machine learning techniques
  • Experience using essential Python libraries such as Scikit-learn, Theano, NumPy, Matplotlib
  • Experience with TensorFlow or PyTorch machine-learning frameworks

Topics at the Africa Lab in Nairobi

K-2022-01

Automated Subgroup Analysis of Post-COVID Condition Risk Factors and Interventions

Post-COVID conditions are a wide range of new, returning, or ongoing symptoms that occur in individuals previously infected by the SARS-CoV-2 virus, even in those who had no symptoms during the initial infection. Post-COVID symptoms typically occur three months from the onset of COVID-19, last for at least two months, and cannot be explained by an alternative diagnosis. To date, little is known about the prevalence, incidence, risk factors, and interventions for ameliorating post-COVID conditions.

The overarching goal of this research is to evaluate variations of care associated with post-COVID conditions and related interventions. The specific objectives are:

  1. To discover the segments of COVID-19-positive persons with higher-than-expected rates of post-COVID conditions
  2. To discover the segments of persons with post-COVID conditions who have lower-than-expected rates of symptom resolution
  3. To examine the heterogeneous treatment effects of different COVID-19 treatment strategies and identify the subgroups of COVID-19-positive persons whose post-COVID conditions are most impacted by the treatment
  4. To examine the heterogeneous treatment effects of post-COVID condition treatment strategies and identify the subgroups of persons with post-COVID conditions whose symptom resolution is most impacted by the treatment

In this project, we will analyze the National COVID Cohort Collaborative (N3C) dataset provided and maintained by the National Center for Advancing Translational Sciences (NCATS), a component of the National Institutes of Health, United States (https://ncats.nih.gov/n3c/about/data-overview). The N3C dataset is a collection of clinical, laboratory, and diagnostic data on over 8 million persons, including 2.7 million positive COVID-19 cases, from multiple institutions in the United States as of October 2021. It is de-identified, aggregated, and harmonized in the NCATS N3C Data Enclave and has been made available for the research community to study COVID-19 outcomes, treatments, and interventions.

Data Preprocessing:

  • Define and extract the required cohorts of persons from the N3C dataset: the cohort of COVID-19-positive persons, the cohort of persons with post-COVID conditions, and the cohort of persons with post-COVID conditions whose symptoms have resolved.
  • Identify and extract the covariates, COVID-19 treatments/interventions, and post-COVID condition treatments/interventions associated with each identified cohort.
Automatic Outcome Stratification:
  • Apply automated stratification via subset scanning over the covariate space of the cohort of COVID-19-positive persons to discover the segments with significantly higher-than-expected rates of post-COVID conditions.
  • Apply automated stratification via subset scanning over the covariate spaces of the cohorts of persons with post-COVID conditions to discover the segments with significantly lower-than-expected rates of post-COVID symptom resolution.
Heterogeneous treatment effect analysis:
  • For each treatment/intervention of interest used in the management of COVID-19, train a propensity score model to predict the likelihood of a person receiving the treatment given the person's baseline covariates. Use the propensity score model to eliminate bias due to observable differences between treated and non-treated persons, using techniques such as propensity score weighting and matching (a minimal sketch of this step follows the list). Subsequently, apply automated subgroup analysis via subset scanning over the covariate space of the treated persons in the cohort to identify the segments with significantly higher/lower-than-expected rates of post-COVID conditions. Compare the impacts of the COVID-19 treatment options on post-COVID outcomes across subpopulations.
  • For each treatment/intervention of interest used in the management of a post-COVID condition cohort, train a propensity score model in the same way and use it to eliminate bias due to observable differences between treated and non-treated persons. Subsequently, apply automated subgroup analysis via subset scanning over the covariate space of the treated persons in the cohort to identify the segments with significantly higher/lower-than-expected rates of post-COVID symptom resolution. Compare the impacts of the post-COVID treatment options on symptom resolution across subpopulations.
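
Below is a minimal sketch of the propensity-score weighting step on synthetic data; the covariate matrix, treatment flag, and outcome are hypothetical stand-ins, and subset scanning over the weighted cohort would follow as a separate step:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000

    # Hypothetical stand-ins for baseline covariates, a treatment flag, and an outcome
    X = rng.normal(size=(n, 5))                                # baseline covariates
    treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # treatment depends on covariates
    outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * treated + X[:, 1]))))

    # 1) Propensity score model: P(treatment | covariates)
    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

    # 2) Inverse probability of treatment weights (IPTW) to balance the two groups
    w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

    # Weighted outcome rates give a bias-adjusted treated-vs-control comparison
    rate_treated = np.average(outcome[treated == 1], weights=w[treated == 1])
    rate_control = np.average(outcome[treated == 0], weights=w[treated == 0])
    print(f"IPTW-adjusted outcome rates: treated={rate_treated:.3f}, control={rate_control:.3f}")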

References
Detection of Anomalous Patterns Associated with the Impact of Medications on 30-Day Hospital Readmission Rates in Diabetes Care

Identifying significant predictive bias in classifiers

Efficient discovery of heterogeneous treatment effects in randomized experiments via anomalous pattern detection

Estimating the effect of treatment on binary outcomes using full matching on the propensity score

Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies

K-2022-02

Future of Health: Transformation of Health Data in the Generation of Contextual Predictions

Our team is focused on improving the process of evidence-informed decision making, and we develop or extend tools from the space of Artificial Intelligence/Machine Learning to complement computational models already familiar in the domain of interest.

In this internship project our focus is on healthcare, and specifically on transforming how health record data and population-level health data are used to generate contextual predictions of health or health risk, with meaningful measures of uncertainty.

As the saying goes, all models are wrong, but some are useful; in this work we will therefore assess the utility of the augmented model predictions against real data looking one week and one month ahead. These predictions are likely to be enabled by techniques such as reinforcement learning (with a specific focus on policy-based methods), probabilistic heuristic search, and time series analysis.

Related Reading
Wang, Quan, et al. "Knowledge graph embedding: A survey of approaches and applications." IEEE Transactions on Knowledge and Data Engineering 29.12 (2017): 2724-2743.

Bent, Oliver, et al. "Novel exploration techniques (NETs) for malaria policy interventions." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 32. No. 1. 2018.

Walonoski, Jason, et al. "Synthea: An approach, method, and software mechanism for generating synthetic patients and the synthetic electronic health care record." Journal of the American Medical Informatics Association 25.3 (2018): 230-238.

Wachira, Charles M., et al. "A platform for disease intervention planning." 2020 IEEE International Conference on Healthcare Informatics (ICHI). IEEE, 2020.

Walonoski, Jason, et al. "Synthea™ Novel coronavirus (COVID-19) model and synthetic data set." Intelligence-based medicine 1 (2020): 100007.

Kerr, Cliff C., et al. "Covasim: an agent-based model of COVID-19 dynamics and interventions." PLOS Computational Biology 17.7 (2021): e1009149.

K-2022-03

Cross-Modal Representation Analysis in Dermatology Academic Materials

Images depicting dark skin tones are significantly under-represented in the educational materials used to teach primary care physicians and dermatologists to recognize skin diseases. This could contribute to disparities in skin disease diagnosis across racial groups. Previously, domain experts have manually assessed textbooks to estimate the diversity of skin images. Manual assessment does not scale to many educational materials and introduces human error. To automate this process, we are working on a project that aims to automatically analyze the representation of skin tones in dermatology academic materials, such as textbooks. This project is a cross-lab collaboration of IBM Research labs in Nairobi (Kenya), Zurich (Switzerland), and New York (USA), along with external collaborators from academia, including Stanford University. Current work focuses on extracting images from documents, selecting skin images, segmenting skin pixels, and estimating skin tones [1]. A promising extension of the current work is to analyze the textual content of the academic materials in addition to the imagery content, in a cross-modal setting, to evaluate the representation of subgroups (e.g., skin tones, sex, and age). The proposed project requires familiarity with recent natural language and image processing techniques.
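
For context, one widely used proxy for skin tone in this literature is the Individual Typology Angle (ITA), computed from CIELAB values. The sketch below is illustrative only; it assumes the skin pixels have already been segmented and is not the project's actual pipeline:

    import numpy as np
    from skimage.color import rgb2lab

    def individual_typology_angle(rgb_patch):
        """Estimate skin tone via ITA = arctan((L* - 50) / b*) * 180 / pi.

        Larger angles correspond to lighter skin tones. `rgb_patch` is an
        HxWx3 float array in [0, 1] containing only segmented skin pixels.
        """
        lab = rgb2lab(rgb_patch)
        L, b = lab[..., 0], lab[..., 2]
        ita = np.degrees(np.arctan2(L - 50.0, b))
        return float(np.median(ita))        # median is robust to stray non-skin pixels

    # Hypothetical usage on an already-segmented skin region
    patch = np.full((8, 8, 3), [0.85, 0.70, 0.60])
    print(f"ITA = {individual_typology_angle(patch):.1f} degrees")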

Related Reading
[1] G. A. Tadesse et al., Automated Evaluation of Representation in Dermatology Educational Materials, AAAI Workshop on Trustworthy AI in Healthcare, 2021.

Topics at the Europe Lab in Zurich

Z-2022-01

Advancing AI Models for Document Conversion

Documents are ubiquitous in everyday life. They are created at an ever-increasing rate and often encode very valuable information. Unfortunately, they are often stored in complex formats such as PDF, which erase their logical structure.
To extract the knowledge, we therefore need automatic methods to convert these documents back into programmatically accessible formats (e.g. JSON). To that end, our group develops state-of-the-art AI methods for interpreting the structure of a document (title, paragraph, table, figure, caption, ...) and its substructures (rows/columns of tables, axis labels of figures, etc.).
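
As a purely hypothetical illustration of such a target format (the actual schema used by the group may differ), a converted document can be thought of as a list of typed layout elements:

    import json

    # Hypothetical converted document: a flat list of typed layout elements,
    # each carrying its page number and bounding box
    converted = {
        "title": "An Example Paper",
        "elements": [
            {"type": "title", "page": 1, "bbox": [72, 700, 540, 730], "text": "An Example Paper"},
            {"type": "paragraph", "page": 1, "bbox": [72, 500, 540, 690], "text": "Documents are ubiquitous ..."},
            {"type": "table", "page": 2, "bbox": [72, 300, 540, 480],
             "rows": [["material", "property"], ["A", "B"]]},
            {"type": "caption", "page": 2, "bbox": [72, 270, 540, 295], "text": "Table 1: ..."},
        ],
    }
    print(json.dumps(converted, indent=2))
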
If you are curious about these methods and would like to co-develop them with us, please reach out!

Requirements

  • Background in Artificial Intelligence and Machine Learning (familiarity with pytorch)
  • Programming skills (Python & git)
Z-2022-02

Graph Convolutional Networks to Find Hidden Knowledge in Large Document Graphs

Knowledge can be categorized into two components: a factual part and a hypothesized part. For the factual part, one can use graph structures, in which nodes represent entities (e.g. materials, properties, value ranges, etc.) and links represent the facts. For example, the statement `Material A has property B of value C.` can be represented in a graph as `node A` -> `node B` -> `node C`.
Beyond exploring the known facts in a specific knowledge domain, graphs also allow users to hypothesize. Such hypotheses can be generated in several ways. One way is link prediction, i.e. inferring a link between two nodes which are not yet linked but share many neighbours. Another way is to cluster the nodes (supervised, semi-supervised, or unsupervised) and assume that nodes in the same cluster share similar attributes.
This form of "hypothesizing" on graphs can be done very efficiently with Graph Convolutional Networks. They are currently being explored on large document graphs in technical domains such as Material Science in order to predict properties of materials.
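
As a minimal illustration of the underlying operation (a sketch, not the project's actual models), a single graph-convolution layer propagates each node's embedding to its neighbours, after which link scores can be read off as embedding similarities:

    import torch

    def gcn_layer(A, H, W):
        """One graph-convolution step: H' = relu(D^-1/2 (A + I) D^-1/2 H W)."""
        A_hat = A + torch.eye(A.size(0))              # add self-loops
        d_inv_sqrt = torch.diag(A_hat.sum(dim=1).pow(-0.5))
        return torch.relu(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

    # Tiny graph for "Material A has property B of value C":
    # node 0 = A, node 1 = B, node 2 = C, links A-B and B-C (made symmetric)
    A = torch.tensor([[0., 1., 0.],
                      [1., 0., 1.],
                      [0., 1., 0.]])
    H = torch.randn(3, 8)     # initial node embeddings
    W = torch.randn(8, 4)     # learnable layer weights

    H1 = gcn_layer(A, H, W)
    scores = H1 @ H1.T        # link-prediction scores as inner products
    print(scores)
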
If you are interested in hypothesizing on graphs and would like to experiment with new technologies, please reach out!

Requirements

  • Background in Artificial Intelligence and Machine Learning (familiarity with pytorch)
  • Programming skills (Python & git)
Z-2022-03

Computer Vision for Deep Search in Bioactive Molecule Images

Computer vision methods, in particular object detection and instance segmentation, are of high importance for Deep Search in the bioactive molecule domain and in organic chemistry generally. The reason is that images of molecules (in the scientific literature) and images of so-called Markush structures (in patents) contain crucial information that is not available in the documents' text.
The goal of this internship is to build object detection and instance-segmentation models to improve search for functional groups and substructures in molecule images, to enable similarity search of molecules ("fingerprinting"), and to investigate the alternative chemical compounds covered by generic Markush structures.

Requirements

  • Background in Artificial Intelligence and Machine Learning (familiarity with pytorch)
  • Programming skills (Python & git)
Z-2022-04

NLP for Material Science

Natural Language Processing is a cornerstone technology to extract valuable information from documents. Despite the recent impressive progress that has been made in this field, there are still grand challenges for NLP, especially with regard to extracting data in specific technical fields.
Material science is one of the fields that is extremely hard to tackle with NLP, primarily due to its complex taxonomy and convoluted language (long sentences with complex structure). As such, new AI methods are absolutely essential to reach satisfactory accuracy and performance.
In our group, we have assembled large document collections in material science (25M patents, 190M articles, etc.) and have developed dedicated NLP models to detect key entities such as materials, properties, material classes, attributes, etc. However, both detection (so-called named-entity recognition) and relationship extraction are still very challenging.
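
As a generic illustration of named-entity recognition (using a publicly available general-purpose model, not the group's dedicated material-science models), note how an off-the-shelf NER pipeline finds organizations and locations but misses domain entities such as material names, which is exactly why dedicated models are needed:

    from transformers import pipeline

    # Generic English NER pipeline (downloads a default public model)
    ner = pipeline("ner", aggregation_strategy="simple")

    text = "Thin films of zinc oxide were deposited on a silicon substrate at IBM Research in Zurich."
    for entity in ner(text):
        # A general-purpose model tags IBM Research and Zurich, but not "zinc oxide"
        print(entity["entity_group"], entity["word"], round(float(entity["score"]), 2))
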
If you are interested in joining the team in order to advance these NLP models in the domain of Material-Science, please reach out to us!

Requirements

  • Background in Artificial Intelligence and Machine Learning (familiarity with pytorch)
  • Programming skills (Python & git)
Z-2022-05

NLP for Business-Insights

Natural Language Processing is a cornerstone technology to extract valuable information from documents. Despite the recent impressive progress that has been made in this field, there are still grand challenges for NLP, especially with regard to extracting data in specific technical fields.
Extracting information related to business intelligence from text and tables is still extremely hard with conventional NLP, primarily due to the complex taxonomy and convoluted language involved (long sentences with complex structure). As such, new AI methods are absolutely essential to reach satisfactory accuracy and performance.
In our group, we have assembled large document collections related to business events (100K annual reports, 400M news articles, etc.) and have developed dedicated NLP models to detect key entities such as companies, key performance indicators (KPIs), products, technologies, locations, persons, etc. However, both detection (so-called named-entity recognition) and relationship extraction are still very challenging.
If you are interested in joining the team in order to advance these NLP models in the domain of Business-Intelligence, please reach out to us!

Requirements

  • Background in Artificial Intelligence and Machine Learning (familiarity with pytorch)
  • Programming skills (Python & git)
Z-2022-06

Enterprise NLP powered by ML and DL

The IBM Research Laboratory in Zurich is leading the design of novel, cutting-edge solutions customized to tackle challenging industry-specific Natural Language Processing (NLP) problems pertaining to specialized domains. The main goal is to replace or accelerate traditional human-supervised procedures with automated services leveraging Machine Learning and Deep Learning methods. Toward this goal, we are looking to strengthen our team with highly motivated interns who will contribute to the design and development of such solutions. The successful candidate will join our team at the Zurich Research Laboratory, having the opportunity to work in a unique research-corporate environment and to gather first-hand experience in developing novel AI services based on advanced Machine Learning and Deep Learning methods in the NLP domain.

Core activities
Our group is a diverse team with a wide set of technical skills. The intern will have the opportunity to work between research and development on one or a combination of the following tracks:

  • Create novel, efficient, interactive data-driven approaches powered by machine learning and deep learning to address challenging NLP enterprise problems, and publish scientific findings in top NLP/AI Conferences
  • Closely collaborate with Subject Matter Experts (SMEs) to develop custom, domain-specific NLP innovation services that fit into existing complex business processes as well as improve and accelerate them
  • Design, develop and implement proofs of concept and prototypes to be ported to and included in the IBM Public Hybrid Cloud offering, contributing to engineering efforts from design to implementation, solving complex technical challenges along the way and accelerating the transfer of research innovation to IBM products

Minimum qualifications

  • Bachelor’s degree in computer science or a related technical field or equivalent practical experience
  • Experience in software development with Python
  • Experience in one or more of the following: Machine Learning, Deep Learning, Natural Language Processing
  • Team player, self-motivated with a passion for technology and innovation

Preferred qualifications

  • Experience in PyTorch
  • Experience in algorithms and data structures
  • Experience in working in Unix/Linux environments
  • Independent worker with the ability to effectively operate with flexibility in a fast-paced, constantly evolving team environment
Z-2022-07

AI for Civil Engineering Applications

Aging and deteriorating infrastructure (bridges, tunnels, dams, among others) is a struggle for companies around the world. With the cost of physical inspections and continued maintenance rising all the time, these companies need a better way to manage their current infrastructure. Indeed, roughly 50 billion dollars and two billion civil-engineering labor hours are spent annually monitoring bridges for defects. Asset managers need to identify elements to be repaired or replaced quickly, minimizing the lifetime cost of maintenance of their asset portfolio, without any compromise on safety and regulations. However, correct risk assessment and prioritization become a challenge when inspecting a single bridge takes from days to months.

Our team in Zurich has created a unique, innovative solution based on a combination of drone and AI technology to accelerate the inspection of large civil infrastructures. Our technology has been validated and demonstrated on the third-longest suspension bridge in the world, the Storebaelt, and it applies to many other structures such as buildings, dams, and wind turbines.

In this project, the successful candidate will contribute to developing our portfolio of solutions based on machine learning and deep learning methods to accelerate the inspection of civil engineering infrastructures. The candidate will have the chance to work on AI technology with client-provided data stemming from a real use case. Our work targets integration into major IBM products, such as Maximo Visual Inspection (MVI), where we recently released capabilities around high-resolution images. Our team works in close connection with the Maximo developers, so the candidate has a concrete chance to see the results of their work in actual products used by thousands of customers.

The candidate will work at the IBM Research – Zurich Laboratory, in the AI Automation group, and will have the opportunity to work in a unique corporate environment, acquire experience in several areas, publish in top international conferences, learn how to patent innovative ideas, and deal with clients on real business cases. Our group consists of a highly motivated team of researchers and AI engineers whose experience will guide and help the candidate to successfully complete the challenges of the proposed task. The candidate will have access to HPC and cloud infrastructure equipped with recent variants of GPUs and many other resources and tools to perform the work.

Minimum qualifications

  • Bachelor’s degree in computer science or a related technical field or equivalent practical experience
  • Experience in software development with Python
  • Proficiency in working in Unix/Linux environments
  • Team player, self-motivated with a passion for technology and innovation

Preferred qualifications

  • Experience in one or more of the following: REST APIs, machine learning, deep learning, algorithms and data structures, test automation, distributed computing, CI/CD
  • Practical experience with Machine Learning / Deep Learning frameworks such as PyTorch
  • 3+ years of proven programming experience in Python (or equivalent C/C++ experience)
  • Independent worker with the ability to effectively operate with flexibility in a fast-paced, constantly evolving team environment
Z-2022-08

Extracting Chemical Information from the Chemical Literature

We have developed numerous machine-learning algorithms for predicting the precursors or products of chemical reactions and for recommending the procedures required to carry out reactions in the laboratory. Millions of patents provided the data essential to train these models. Thousands of additional chemical reactions are described in articles published in the chemical literature, so they have the potential to further improve the algorithms' performance. However, these articles are typically provided in a different format, making them inaccessible to programs designed to extract information from patents. The goal of this project is to design new tools to extract chemical information from the text and images of articles published in the chemical literature.
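
As a small illustration of the chemistry-aware post-processing such tools typically require (a sketch only; `candidates` stands in for the output of a hypothetical extractor), candidate SMILES strings can be validated and canonicalized with RDKit:

    from rdkit import Chem

    def validate_extracted_smiles(candidates):
        """Keep only candidates that parse as valid molecules, in canonical form."""
        valid = []
        for smiles in candidates:
            mol = Chem.MolFromSmiles(smiles)          # returns None for unparsable strings
            if mol is not None:
                valid.append(Chem.MolToSmiles(mol))   # canonical form, useful for deduplication
        return valid

    # Hypothetical extractor output: aspirin plus a garbled string with an unclosed ring
    print(validate_extracted_smiles(["CC(=O)Oc1ccccc1C(=O)O", "C1CC"]))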

Requirements

  • Strong background in natural language processing or computer vision
  • Basic chemical knowledge
Z-2022-09

Design of Novel Chemical Reactions

Synthetic organic chemistry has always been concerned with the discovery of novel chemical reactions. Each new reaction adds to the arsenal of synthetic tools available and expands the possibilities for developing and optimising novel molecules. The majority of novel reactions have been discovered by chance, and it has been up to chemists to identify and investigate them in depth using their "chemical intuition." The goal of this research is to investigate machine learning-enhanced methodologies for designing new chemical reactions from publicly available chemical data.

Requirements

  • Strong background in programming
  • Good understanding of physics and chemistry
  • Good understanding of machine learning
Z-2022-10

Automating AI for Advanced Data-Driven Material Manufacturing

The manufacture of materials generates a vast amount of data, including processing conditions, quality checks, and property measurements. The information contained in the data is frequently not fully explored, since significant correlations are often obscured by its complexity. Machine learning algorithms assist in extracting knowledge from complex data, revealing previously undetectable insights. However, properly adjusting the model parameters can be time-consuming, depending on the data structure.
The goal of this project is to design a tool that automates the selection of the hyperparameters describing the machine-learning architecture, enabling faster and finer tuning of the model to the structure of the data. The project will leverage assets built by other team members, which need to be adapted for this goal.
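
As a minimal illustration of automating hyperparameter selection (a generic cross-validated grid search on synthetic stand-in data; the project's own assets and search strategy may differ):

    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import GridSearchCV

    # Synthetic stand-in for manufacturing data: processing conditions -> property
    X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

    # Automated hyperparameter selection over a small grid, scored by cross-validation
    search = GridSearchCV(
        RandomForestRegressor(random_state=0),
        param_grid={"n_estimators": [50, 200], "max_depth": [3, 10, None]},
        cv=5,
    )
    search.fit(X, y)
    print("best hyperparameters:", search.best_params_)
    print("cross-validated R^2:", round(search.best_score_, 3))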

Requirements

  • Strong background in machine learning and programming
  • Teamwork
  • Basic knowledge of material science
Z-2022-11

Decentralized Digital Identity Platform and Use Cases

IBM has a long history in the area of identity management, which is a core requirement of any trusted business relationship. Actors in any business relationship should be well identified, and their messages to other parties authenticated.

The protection of authenticated messages can be accomplished by digital signatures, where the party (identity) sending the message is in possession of a digital key-pair that constitutes the authentication credentials for that identity. The public key is used by a verifying party to ensure that the message was indeed signed by the originating party.
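
The signature mechanism described above can be illustrated in a few lines with the Python cryptography package (a generic illustration, not part of the project's codebase):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # The sending party holds the key pair; the public key is shared with verifiers
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    message = b"order 100 units, account 42"
    signature = private_key.sign(message)

    # A verifying party checks that the message was indeed signed by the key holder
    try:
        public_key.verify(signature, message)
        print("signature valid: message is authentic")
    except InvalidSignature:
        print("signature invalid: message tampered with or wrong key")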

The more challenging part is to securely map the authentication credentials to the actual user identities. Enterprises normally rely on trusted parties called Certificate Authorities to ensure the mapping between public keys and identities. Extending this functionality to cross-enterprise communication scenarios, or to scenarios that involve end-users/consumers, is not trivial.

Self-sovereign identity (SSI) solutions are a relatively new approach to addressing this challenge. At the heart of any SSI system is a consortium that maintains a "Verifiable Data Registry" (typically a blockchain). An issuer's public key is written to a Decentralized Identifier (DID) document in this registry. Verifiers can choose to trust the DID documents in the registry, which means they trust the registry's governance backed by the consortium.

We are working towards a decentralized digital identity system that manages diverse digital identities, with a focus on addressing important use cases from the public sector, the financial industry, and other relevant industries.

The scope of our research includes but is not restricted to:

  • Analysis of the requirements related to decentralized identity for the relevant use cases.
  • Design/extension of decentralized identity solutions for the relevant use cases.
  • Implementation of the designed solutions.
  • Ensuring interoperability with other identity systems and driving open-source contributions.

As an intern, you will investigate identity solutions for client use cases and have first-hand experience of building identity solutions for real-world systems.


Requirements

  • Familiarity with identity solutions, SSI, blockchain concepts
  • Programming skills in Java, Golang, or similar
  • Experience with DevOps and standard coding practices
Z-2022-12

Secure execution on a blockchain: Hyperledger Fabric Private Chaincode

Hyperledger Fabric is a permissioned blockchain platform that offers common program execution on an infrastructure shared by multiple parties, none of which is individually trusted. Hyperledger Fabric Private Chaincode (FPC) enables the secure execution of chaincode for Hyperledger Fabric using Intel SGX. Intel SGX is the most prominent trusted execution environment (TEE) available today; it offers secure execution contexts called enclaves on a CPU, which isolate data and programs from the host operating system in hardware. The FPC project takes up technology from a research project at IBM Research Europe - Zurich.

Multiple projects are available in this context, primarily focusing on designing secure architectures and realizing additional security solutions on FPC. The work is experimental and uses cutting-edge technologies such as Intel SGX. Ideal candidates are already familiar with the concepts of trusted execution technology and with the C/C++/Golang programming languages. Nature of the project: Theory 25%, Systems 75%.

This work is in collaboration with the Hyperledger open-source community.

Z-2022-13

CBDC-DID

The rise of digital payments to the detriment of cash has stirred interest in a digital alternative that is as resilient and reliable as cash, especially in the face of natural disasters or large-scale infrastructure outages. This digital alternative is Central Bank Digital Currency (CBDC for short). CBDC is governments' response to a fragmented payment landscape that is primarily controlled by the private sector. CBDC is intended to replace cash and offer similar guarantees: from being a store of value and medium of exchange, to enabling offline payments and (to a degree) anonymous transactions.

Challenges pertaining to CBDC are diverse in nature and scope: economic, regulatory, and technical. Our research focuses on the technical challenges which, if addressed correctly, can offer answers to both the economic and regulatory ones. One important aspect of our work is digital identity and how it relates to CBDC transactions. Anonymity requirements mandate that users transact without revealing their identities. On the other hand, enforcing regulations requires monitoring and audit capabilities to detect suspicious transactions and trace them back to their originators. Moreover, to avoid settings where users have as many identities as payment service providers, a certain level of identity interoperation is required.

Decentralized identity and zero-knowledge proofs can mitigate some of these tensions. Yet interoperation, performance, and revocation are still the main obstacles to viable identity solutions for CBDC. At IBM Research, our task is to come up with such a solution.

The scope of our research includes but is not restricted to:

  • Collecting the requirements related to decentralized identity in the CBDC space.
  • Design of decentralized identity solutions for CBDC systems.
  • Implementation of the designed solutions on top of Hyperledger Fabric.

As an intern, you will investigate identity solutions for CBDC and have first-hand experience of building blockchain solutions for real-world systems.

Z-2022-14

Deep Learning Incorporating Biologically-Inspired Neural Dynamics and Learning

Neural networks are the key technology of artificial intelligence and have led to breakthroughs in many important applications. These were achieved primarily by artificial neural networks (ANNs) that are loosely inspired by the structure of the brain: neurons interconnected by synapses that are trained offline and fixed after deployment. Meanwhile, the neuroscientific community has developed the spiking neural network (SNN) model, which additionally incorporates biologically realistic temporal dynamics in the neuron structure. Although ANNs achieve impressive results, there is a significant gap in terms of power efficiency and learning capabilities between deep ANNs and biological brains. One promising avenue to reduce this gap is to incorporate biologically inspired dynamics and synaptic plasticity mechanisms into common deep-learning architectures. Recently, the IBM team demonstrated a new type of ANN unit, called a Spiking Neural Unit (SNU), that enables us to incorporate SNN dynamics directly into deep ANNs. Our results demonstrate competitive performance, surpassing state-of-the-art RNNs and LSTM- and GRU-based networks.
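
The sketch below gives a loose, simplified rendering of such spiking dynamics in PyTorch (leaky integration, thresholded spiking, and state reset). It is an illustrative approximation only, not the published SNU formulation, which among other things uses differentiable activations so the unit can be trained with standard deep-learning tools:

    import torch

    def snu_step(x, s_prev, y_prev, W, tau=0.8, threshold=1.0):
        """One time step of a simplified spiking unit.

        The state s accumulates input, decays with factor tau, and is reset
        wherever the unit emitted a spike y at the previous step.
        """
        s = torch.relu(x @ W + tau * s_prev * (1.0 - y_prev))   # leaky integration with reset
        y = (s > threshold).float()                             # spike when state crosses threshold
        return s, y

    # Process a short input stream: 3 inputs feeding 2 units
    torch.manual_seed(0)
    W = torch.rand(3, 2)
    s, y = torch.zeros(2), torch.zeros(2)
    for t in range(5):
        s, y = snu_step(torch.rand(3), s, y, W)
        print(f"t={t}  state={s.numpy().round(2)}  spikes={y.numpy()}")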

Furthermore, in another recent work on Online Spatio-Temporal Learning (OSTL), we provided a learning framework based on biological insights. OSTL offers an alternative to backpropagation-through-time (BPTT), enabling a new, efficient approach to deep learning on temporal data without BPTT's requirement of unrolling through time. Such a mode of operation enables continuous life-long learning that is closer to how humans learn.

In this project, we aim to investigate life-long online learning approaches in conjunction with biologically-realistic dynamics in deep networks. Specifically, the focus will be on incorporating SNUs into large-scale deep ANNs for processing continuous streams, such as visual or auditory sensory inputs, and interacting in virtual worlds. The main task will be to explore further online learning algorithms for life-long learning. These developments will allow us to assess the impact of biologically realistic aspects on important AI tasks, and indicate how to close the gap between deep learning and biological brains. The IBM team will provide extensive scientific guidance and access to a powerful GPU cluster.

Requirements

  • Experience with TensorFlow or PyTorch machine-learning framework
  • Strong programming skills in Python
  • Strong analytical and problem-solving skills
  • Excellent communication and team skills
Z-2022-15

Neurosymbolic Architectures to Approach Human-like AI

Neither symbolic AI nor deep neural nets alone have reproduced the kind of intelligence expressed in humans. This is because symbolic AI fundamentally lacks the ability to learn directly from examples, while neural nets are not able to dynamically bind information, an open problem that has caused the persistent failure of neural nets to reuse knowledge and generalize systematically. In this project, we plan to combine the best of both worlds to approach human-level intelligence. Specifically, we will take a novel look at data-driven representations, the operations associated with them, and the analog computing substrates that naturally enable them. For benchmarking, we will focus on solving abstract visual reasoning problems that mainly involve two aspects of intelligence: visual perception and abstract reasoning.
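
One family of data-driven representations with native support for dynamic binding is vector-symbolic architectures, in which high-dimensional vectors are composed with simple element-wise operations. The sketch below illustrates the general idea (it is not the project's specific design): binding a role to a filler and recovering the filler again:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 10_000   # dimensionality of the bipolar hypervectors

    def hv():
        return rng.choice([-1, 1], size=d)     # random bipolar hypervector

    def bind(a, b):
        return a * b                           # element-wise binding (self-inverse)

    def similarity(a, b):
        return float(a @ b) / d                # ~1 for identical, ~0 for unrelated

    # Bind the role "color" to the filler "red"; unbinding recovers the filler
    color, red, blue = hv(), hv(), hv()
    bound = bind(color, red)
    print("recovered red:", round(similarity(bind(bound, color), red), 2))    # ~1.0
    print("vs. blue     :", round(similarity(bind(bound, color), blue), 2))   # ~0.0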

Z-2022-16

In-Network Computing

Computing in the network is a system-architecture paradigm promising benefits such as reduced load on CPUs (freeing up cores for other tasks), more predictable latency, and the ability to cope with high network bandwidths. Once viewed primarily as a control-plane connectivity paradigm, in-network computing is rapidly emerging as an intelligent data-processing accelerator for more complex processes and applications operating beyond the traditional perimeter. This internship aims at investigating the integration of domain-specific accelerators with cloud FPGAs targeting extreme-scale data processing. In addition, to meet the demands of modern cloud economics, we will offload the control-plane provisioning of the standalone FPGAs to a serverless platform (e.g. Knative, OpenWhisk, etc.). The candidate will be given the opportunity to develop and evaluate their in-network computing solution on off-the-shelf FPGAs (e.g. Xilinx Alveo) or to study the scalability potential of the disaggregated cloudFPGA research platform, which features a world-record density of 64 network-attached FPGAs per 2U node.

Requirements
The research focus will be on exploring techniques for implementing efficient network-attached, FPGA-accelerated services for domain-specific workloads running in a cloud environment. The work also involves interactions with several researchers focusing on various aspects of the project. The ideal candidate should be well versed in distributed systems and have basic FPGA skills (VHDL/Verilog, C++ high-level synthesis, Xilinx Vitis/Vivado HLS/Vivado) and programming skills (C++, Python). Experience with serverless platforms (Knative/OpenWhisk) would be desirable. Familiarity with CI/CD pipelines (Jenkins/TravisCI), distributed source control (Git), and code documentation (Doxygen) is desirable but not mandatory. Good oral and written English and good presentation skills would also be an asset.