
Research Blog


The integration of algorithms in decision-making processes has raised critical questions about fairness. While we strive to develop measures to detect bias in algorithms, ensuring fairness extends beyond mere detection. The underlying issue is not solely technical; it is deeply rooted in ethical norms and societal standards.

Diagnostic approaches that companies like IBM, Microsoft, and Google are developing to tackle algorithmic fairness may not be comprehensive, but they are a step towards accountability. However, fairness in algorithms cannot be achieved through static measures alone. Instead, it requires a dynamic approach that considers the evolving contexts in which these algorithms operate.

The discourse on algorithmic fairness is further complicated by impossibility theorems, which suggest that some fairness criteria cannot be satisfied simultaneously in certain decision-making contexts. This is particularly evident in the use of assessment tools in the criminal justice system, where algorithms such as COMPAS have been criticized for potential racial biases.
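A small numeric sketch makes this tension concrete. A standard identity relates a classifier's false positive rate to the group's prevalence, its positive predictive value (PPV), and its false negative rate; when base rates differ between groups, equal PPV and equal FNR force unequal FPRs, which is the crux of Chouldechova's impossibility result. The prevalence and rate values below are illustrative assumptions:

```python
# Illustration of a fairness impossibility: with unequal base rates, equal
# PPV (a calibration-like criterion) and equal FNR force unequal FPRs.
def fpr_given(prevalence: float, ppv: float, fnr: float) -> float:
    # Identity: FPR = prevalence/(1-prevalence) * (1-PPV)/PPV * (1-FNR)
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

for group, prevalence in [("group A", 0.3), ("group B", 0.5)]:
    print(group, "FPR =", round(fpr_given(prevalence, ppv=0.7, fnr=0.2), 3))
# group A FPR = 0.147, group B FPR = 0.343 -- the error rates must diverge.
```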

For a more equitable future, we must consider the broader implications of algorithmic decision-making. This entails a continuous evaluation of outcomes, transparency in the criteria used, and the willingness to adapt and refine algorithms as we uncover biases. Only through a sustained commitment to these principles can we hope to harness the power of algorithms for fair and just decision-making.

Abstract: The emergence of deep learning and advanced neural network architectures has catalyzed a paradigm shift in the development of recommendation systems. This research document delves into the innovative applications of autoencoder neural networks, neural collaborative filtering, and deep learning algorithms, which have significantly enhanced the personalization engines of leading e-commerce and social media giants like Amazon, Walmart, Facebook, and Twitter.

Introduction: Recommendation systems are the silent engines driving user engagement and sales across digital platforms. Traditional collaborative filtering methods have evolved, giving way to sophisticated machine learning techniques that deliver unparalleled personalization and predictive power. This research highlights the transformative impact of these techniques in today's hyper-connected digital ecosystem.


Advancements in Neural Networks for Recommendation Systems: Autoencoder neural networks, known for their efficiency in dimensionality reduction and feature learning, have been adapted to distill complex user-item interaction data into actionable insights. By capturing non-linear relationships and latent factors, autoencoders offer a nuanced understanding of user preferences.
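To make this concrete, here is a minimal PyTorch sketch of an autoencoder trained on a binary user-item interaction matrix; the layer sizes, toy data, and training loop are illustrative assumptions, not a production configuration:

```python
# Minimal autoencoder over user-item interactions (illustrative sketch).
import torch
import torch.nn as nn

class InteractionAutoencoder(nn.Module):
    def __init__(self, n_items: int, latent_dim: int = 32):
        super().__init__()
        # Encoder compresses each user's interaction vector into a latent code.
        self.encoder = nn.Sequential(nn.Linear(n_items, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        # Decoder reconstructs interaction scores for every item.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_items))

    def forward(self, x):
        return self.decoder(self.encoder(x))

n_users, n_items = 64, 500
interactions = (torch.rand(n_users, n_items) < 0.05).float()  # toy implicit feedback

model = InteractionAutoencoder(n_items)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(interactions), interactions)
    loss.backward()
    optimizer.step()

# To recommend, rank items a user has not yet interacted with by their
# reconstructed scores.
scores = model(interactions).detach()
```

The latent code is where the non-linear latent factors mentioned above live; ranking unseen items by reconstruction score turns the autoencoder into a recommender.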

Neural Collaborative Filtering (NCF): NCF represents a breakthrough in recommendation systems, combining the strengths of traditional collaborative filtering with the representation learning capabilities of neural networks. This approach allows for more accurate predictions of user behavior by modeling the complex and non-linear interactions between users and items.
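A compact sketch of this idea follows, pairing a generalized matrix factorization (GMF) branch with an MLP branch in line with the general NCF design; the embedding sizes and toy batch below are assumptions:

```python
# NCF sketch: GMF branch (linear interactions) fused with an MLP branch
# (non-linear interactions) for implicit-feedback prediction.
import torch
import torch.nn as nn

class NCF(nn.Module):
    def __init__(self, n_users: int, n_items: int, dim: int = 16):
        super().__init__()
        self.user_gmf = nn.Embedding(n_users, dim)
        self.item_gmf = nn.Embedding(n_items, dim)
        self.user_mlp = nn.Embedding(n_users, dim)
        self.item_mlp = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 32), nn.ReLU(),
                                 nn.Linear(32, dim), nn.ReLU())
        self.out = nn.Linear(2 * dim, 1)  # fuses both branches

    def forward(self, users, items):
        gmf = self.user_gmf(users) * self.item_gmf(items)  # element-wise product
        mlp = self.mlp(torch.cat([self.user_mlp(users),
                                  self.item_mlp(items)], dim=-1))
        return self.out(torch.cat([gmf, mlp], dim=-1)).squeeze(-1)

model = NCF(n_users=1000, n_items=2000)
users = torch.randint(0, 1000, (256,))
items = torch.randint(0, 2000, (256,))
labels = torch.randint(0, 2, (256,)).float()  # 1 = user interacted with item
loss = nn.BCEWithLogitsLoss()(model(users, items), labels)
loss.backward()
```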

Deep Learning for Enhanced Accuracy: Deep learning algorithms have been pivotal in processing large volumes of data, learning intricate patterns, and delivering real-time recommendations. These algorithms have been fine-tuned to adapt to the dynamic nature of user preferences, enabling platforms to recommend content that resonates with users at an individual level.

Interchangeable Modular Systems: Our research introduces a modular approach, where different neural network models can be interchanged and combined to cater to specific aspects of recommendation, such as temporal dynamics, content analysis, and social network influences. This flexibility allows for the continuous evolution of the recommendation engine, adapting to new trends and user behaviors.
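One way to realize this modularity, sketched below under assumed class and method names, is to give every model a common scoring interface and blend interchangeable modules with weights:

```python
# Interchangeable recommender modules behind one interface (illustrative).
from typing import Protocol
import numpy as np

class Recommender(Protocol):
    """Any module that scores items for a user can be swapped in."""
    def score(self, user_id: int, item_ids: np.ndarray) -> np.ndarray: ...

class PopularityModule:
    """Trivial example module: scores items by global popularity."""
    def __init__(self, popularity: np.ndarray):
        self.popularity = popularity

    def score(self, user_id: int, item_ids: np.ndarray) -> np.ndarray:
        return self.popularity[item_ids]

class EnsembleRecommender:
    """Weighted blend of whichever modules are currently plugged in."""
    def __init__(self, modules: list[tuple[Recommender, float]]):
        self.modules = modules

    def score(self, user_id: int, item_ids: np.ndarray) -> np.ndarray:
        total = np.zeros(len(item_ids))
        for module, weight in self.modules:
            total += weight * module.score(user_id, item_ids)
        return total

pop = PopularityModule(np.random.default_rng(0).random(1000))
engine = EnsembleRecommender([(pop, 1.0)])  # swap in temporal or social modules here
print(engine.score(user_id=7, item_ids=np.array([1, 2, 3])))
```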


The Role of Betweenness in Personalization: Betweenness centrality, applied to social networks, plays a crucial role in understanding how information flows between users. By identifying key influencers and the bridge nodes that connect communities, recommendation systems can strategically position products and content to enhance visibility and engagement.
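As a quick illustration, betweenness centrality can be computed directly with networkx; the built-in karate-club graph below merely stands in for a real user interaction graph:

```python
# Finding high-betweenness "bridge" users with networkx (illustrative).
import networkx as nx

G = nx.karate_club_graph()  # toy stand-in for a platform's social graph
betweenness = nx.betweenness_centrality(G)

# Users with the highest betweenness lie on many shortest paths between
# communities, making them natural seeds for content placement.
influencers = sorted(betweenness, key=betweenness.get, reverse=True)[:5]
print(influencers)
```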

Hybrid Content-Based and Collaborative Filtering: The integration of content-based and collaborative filtering techniques ensures that users receive recommendations that are not only similar to their past preferences but also popular among similar users. This hybrid approach leverages both the content's attributes and user interaction data, leading to a more comprehensive recommendation strategy.
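A simple way to operationalize such a hybrid, sketched here with assumed inputs, is a weighted blend of collaborative scores and content similarity:

```python
# Weighted hybrid of collaborative and content-based scores (illustrative).
import numpy as np

def hybrid_scores(collab_scores: np.ndarray, item_features: np.ndarray,
                  user_profile: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Blend collaborative scores with per-item content similarity."""
    # Cosine similarity between each item's features and the user profile.
    norms = np.linalg.norm(item_features, axis=1) * np.linalg.norm(user_profile)
    content = item_features @ user_profile / np.maximum(norms, 1e-9)
    return alpha * collab_scores + (1 - alpha) * content

rng = np.random.default_rng(0)
collab = rng.random(100)               # e.g. NCF or matrix-factorization output
features = rng.random((100, 20))       # e.g. genre or text embedding per item
profile = features[:10].mean(axis=0)   # mean features of items the user liked
top_items = np.argsort(hybrid_scores(collab, features, profile))[::-1][:10]
print(top_items)
```

The blending weight alpha is an assumption here; in practice it would be tuned on held-out interactions.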

Social Network Analysis for Personalization: Social network analysis provides a deep understanding of the relationships and interactions among users. By applying machine learning to this social graph, our systems can base predictions not only on a user's direct preferences but also on influences within their social circle, leading to a more socially aware recommendation process.

Conclusion: The application of deep learning and neural network technologies has ushered in a new era for recommendation systems. This research presents a suite of advanced methodologies that have been tested and proven to enhance the personalization capabilities of online platforms. The intersection of collaborative filtering, betweenness theory, and social network analysis paves the way for the next generation of intelligent recommendation systems that are precise, user-centric, and socially attuned.


Abstract: Machine Learning (ML) stands at the vanguard of the fourth industrial revolution, presenting a transformative influence on business processes. This document delves into the groundbreaking discoveries that have been rigorously tested and implemented, showcasing a proven track record in revolutionizing traditional business operations.

Introduction: The integration of Machine Learning algorithms into business processes is no longer a luxury but a necessity for staying competitive. The strategic application of ML is fundamentally altering how businesses operate, from automating routine tasks to providing deep insights into consumer behavior.

Unleashing Efficiency with Automation: Our approach uses advanced ML algorithms to automate complex business processes, leading to significant gains in efficiency and accuracy. ML-powered systems can ingest vast amounts of data, learn from it, and perform tasks that traditionally require human intervention, such as customer service through intelligent chatbots or predictive maintenance in manufacturing.

Enhanced Decision-Making with Predictive Analytics: The application of predictive analytics has proven to be a game-changer. ML algorithms can forecast trends, anticipate market changes, and adapt strategies proactively. These capabilities empower businesses to make informed decisions swiftly, staying ahead of market curves.

Customer Insights and Personalization: A breakthrough in our research is the ability of ML to glean insights from consumer data, enabling hyper-personalization. Businesses can now tailor their offerings to individual preferences, leading to enhanced customer satisfaction and loyalty.

Operational Agility and Risk Management: ML algorithms have been pivotal in introducing operational agility by optimizing logistics, inventory management, and supply chains. Risk management has also seen a renaissance with ML's ability to identify and mitigate potential risks before they materialize, safeguarding assets and reputation.

Sustainability and Resource Optimization: Our studies show that ML can optimize resource utilization, leading to more sustainable business practices. By predicting demand and optimizing delivery routes, businesses reduce their carbon footprint and contribute to a greener planet.

Ethical Framework and Responsible AI: The document underscores the importance of an ethical framework in deploying ML. Our approach emphasizes responsible use of AI, with a focus on fairness, accountability, and transparency in algorithmic decision-making.

Conclusion: The proven applications of Machine Learning algorithms documented here represent a seismic shift in business processes. Our discoveries and implementations underscore the immense potential of ML to not only enhance profitability but also to forge a path towards more intelligent, customer-centric, and sustainable business practices.



Title: Advanced Neuroimaging Biomarkers and Multi-Modal Data Fusion for Dynamic Disease Progression Modeling


The field of neuroinformatics is on the verge of a paradigm shift, thanks to the groundbreaking research project "Advanced Neuroimaging Biomarkers and Multi-Modal Data Fusion for Dynamic Disease Progression Modeling." This ambitious undertaking, driven by the power of data, machine learning, and cutting-edge neuroimaging techniques, is poised to revolutionize our understanding of neurodegenerative diseases, particularly Alzheimer's Disease (AD). In this comprehensive analysis, we delve into the core aspects of this project and explore the potential state-of-the-art discoveries it holds.

 

Project Overview:

At its core, this project harnesses the extensive and invaluable dataset provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI) to pioneer advancements in neuroimaging biomarkers and dynamic disease progression modeling. ADNI offers a treasure trove of longitudinal data encompassing neuroimaging scans, clinical assessments, genetics, and more. This data serves as the bedrock upon which this transformative research is built.

 

Key Objectives:


  • Unearthing Novel Biomarkers: A primary mission of this project is to uncover novel biomarkers associated with neurodegenerative diseases. These biomarkers may manifest as structural brain changes, functional variations, or genetic indicators that underlie the progression of these diseases. By identifying these markers, we can gain deeper insights into the pathological processes at play.


  • Integration of Multi-Modal Data: This research pioneers the integration of multi-modal data sources. By harmonizing neuroimaging data with clinical evaluations, genetic profiles, and other pertinent information, it aspires to create a comprehensive and holistic view of disease progression. This synergistic approach promises to provide a more nuanced understanding of neurodegenerative disorders.


  • Dynamic Disease Progression Modeling: The project's dynamic disease progression models represent a significant leap forward. Conventional models often overlook the ever-evolving nature of neurodegenerative diseases. These models will be tailored to individual patients, capturing the intricate and personalized trajectories of disease progression. This allows for the development of more precise and personalized treatment strategies.
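As a hedged illustration of what patient-specific trajectory modeling can look like, the sketch below fits a linear mixed-effects model with per-patient intercepts and slopes to synthetic longitudinal scores; the data and column names are assumptions, not ADNI fields:

```python
# Per-patient progression trajectories via a linear mixed-effects model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for patient in range(30):
    slope = rng.normal(-1.0, 0.4)    # patient-specific rate of decline
    baseline = rng.normal(28, 1.5)   # patient-specific starting score
    for t in range(6):               # six yearly visits
        rows.append({"patient": patient, "years": t,
                     "score": baseline + slope * t + rng.normal(0, 0.5)})
data = pd.DataFrame(rows)

# A random intercept and slope per patient capture individual trajectories.
model = smf.mixedlm("score ~ years", data, groups=data["patient"],
                    re_formula="~years")
result = model.fit()
print(result.summary())
```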

 

Potential State-of-the-Art Discoveries:

 

  • Early Detection Revolution: The project is poised to uncover new biomarkers that facilitate early disease detection. Early diagnosis is a game-changer in neurodegenerative diseases, offering the possibility of interventions before irreversible damage occurs.


  • Precision Medicine: The dynamic disease progression models will open the door to personalized treatment approaches. Tailored interventions based on an individual's disease trajectory have the potential to significantly enhance patient outcomes and quality of life.


  • Trailblazing Data Fusion: This project's innovative approach to integrating multi-modal data is likely to set new standards in the field. It could lead to the development of comprehensive diagnostic tools that offer a more holistic view of neurodegenerative diseases, transforming diagnosis and treatment.


  • Scientific Advancement: The methodologies and findings of this project will be invaluable contributions to the scientific community. They will serve as foundational knowledge for further research and collaboration, propelling the field of neuroinformatics into uncharted territory.


  • Interdisciplinary Potential: The project bridges diverse fields, from neuroscience to data science and beyond. Collaborators from various backgrounds can bring their unique expertise to the table, fostering innovation through diverse perspectives.

 

In sum, "Advanced Neuroimaging Biomarkers and Multi-Modal Data Fusion for Dynamic Disease Progression Modeling" represents a watershed moment in neuroinformatics research. With the potential to make state-of-the-art discoveries, this project invites collaborations from researchers, academicians, and institutions eager to push the boundaries of knowledge. Together, we can unlock new insights, revolutionize diagnostics and treatments, and profoundly impact the lives of individuals affected by neurodegenerative diseases. Join us in this collective endeavor to redefine the future of healthcare and neuroscience.

Biomarker Discovery from Rat Gene Expression for Intervertebral Disc Degeneration - Bamidele Ajisogun


ABSTRACT

Intervertebral disc degeneration (IDD) is a common musculoskeletal disorder that can cause back or neck discomfort and chronic pain associated with aging. The degeneration of the nucleus pulposus (NP) cells, the central component of the intervertebral disc, leads to dehydration, loss of disc height, disc distortion, and segmental instability. Identifying biomarkers for IDD can aid in diagnosis, monitoring, and developing precise treatments for the condition. Gene expression data from young and old, male and female rat intervertebral disc tissue, along with known extracellular matrix-related genes from related human tissues, were analyzed to discover potential biomarkers. Machine learning techniques, including Logistic Regression, Support Vector Machines, Random Forest, Naïve Bayes, and a Rule Learner, were used to analyze the genes and discriminate between the nucleus pulposus (NP) and annulus fibrosus (AF) tissue types. This study presents the feasibility of a Knowledge Augmented Rule Learner (KARL) to provide accurate and interpretable models, which can serve as an efficient integrative biomarker discovery tool for diagnosing and treating IDD in precision medicine. The dataset contains 16,378 genes and 38 samples distributed across tissue type, sex, and age. Further research is necessary to validate the identified biomarkers and understand their role in the disease process.

We analyzed the results of the machine learning algorithms, of which logistic regression performed best, and compared them with the Knowledge Augmented Rule Learner (KARL) framework, which incorporates two sources of knowledge, domain and data, for pattern discovery from small, high-dimensional datasets. We examined the effectiveness of KARL as a transfer rule learning framework in which domain knowledge is transferred to the learning process on data to 1) improve the reliability of the discovered patterns, and 2) study the domain knowledge when compared with the results of the machine learning models. In this work, we generated KARL models on gene expression datasets for six tissue types of the rat data. As our domain knowledge, we used the Ingenuity Knowledge Base (IKB) to extract genes related to hallmarks of IDD from human extracellular matrix data and annotated these prior relationships before learning classifiers from these datasets.
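For context, a baseline comparison like the one described can be set up in a few lines with scikit-learn; the synthetic expression matrix below merely stands in for the 16,378-gene, 38-sample rat data and implies nothing about the actual results:

```python
# Cross-validated comparison of baseline classifiers on a small,
# high-dimensional dataset (illustrative sketch with synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(38, 500))       # samples x genes (feature count truncated)
y = rng.integers(0, 2, size=38)      # 0 = annulus fibrosus, 1 = nucleus pulposus

models = {"logistic_regression": LogisticRegression(max_iter=1000),
          "svm": SVC(),
          "random_forest": RandomForestClassifier(),
          "naive_bayes": GaussianNB()}

for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```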

Our results revealed that the logistic regression model was effective in identifying the biomarkers responsible for IDD. However, KARL produced, on average, rule models that were more robust classifiers than baselines learned without such background knowledge on our IDD prediction tasks using the gene expression datasets. Moreover, KARL served as an integrative approach and surfaced both previously known relationships in these gene expression datasets and new relationships not provided as prior knowledge, enabling informed biomarker discovery for IDD prediction. KARL can be applied to modeling similar data from any other domain and classification task. Future work would involve extending KARL to handle ranked knowledge and derive more general hypotheses for biomedicine.

Building an Advanced Recommendation System using Neural Collaborative Filtering (NCF) and Variational Autoencoders (VAEs)


ABSTRACT

Recommender systems are widely used to offer personalized recommendations for goods or services. Machine learning algorithms have been increasingly adopted by these systems in recent years, and selecting the appropriate algorithm is crucial. However, little guidance is available regarding the current usage of algorithms in recommender systems, and there are many challenges to creating effective recommender systems with machine learning. In this project, we build an advanced recommendation system using Neural Collaborative Filtering (NCF) and Variational Autoencoders (VAEs). We leverage a large-scale rating dataset containing user ID, movie ID, rating, and timestamp attributes to train and test our recommendation system. Our investigation focuses on combining traditional collaborative filtering and matrix factorization techniques with advanced deep learning-based techniques, such as NCF and VAEs, to build our recommendation system. We evaluate each model using multiple evaluation metrics, including precision, recall, and F1-score, and compare the models to select the best one based on recommendation quality, evaluation metrics, and time efficiency.
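For reference, top-k precision and recall (from which F1 follows) can be computed per user as in this small sketch; the cutoff k and the toy inputs are assumptions:

```python
# Per-user precision@k / recall@k for ranked recommendations (illustrative).
def precision_recall_at_k(recommended: list, relevant: set, k: int = 10):
    """Precision@k and recall@k for one user's ranked list."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

recommended = [3, 7, 1, 9, 4, 8, 2, 6, 5, 0]  # model's ranked item ids
relevant = {1, 4, 11, 9}                      # held-out test interactions
p, r = precision_recall_at_k(recommended, relevant, k=5)
f1 = 2 * p * r / (p + r) if (p + r) else 0.0
print(f"precision@5={p:.2f} recall@5={r:.2f} f1={f1:.2f}")
```

In practice these would be averaged over all test users and complemented by ranking-aware metrics such as NDCG.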


FUTURE WORK

Our study presents various opportunities for future work in movie recommendation systems.

Hybrid models: We could explore combining collaborative filtering with content-based recommendation to create a hybrid model that leverages the strengths of both approaches. This could improve recommendation accuracy by incorporating additional features such as movie genre, director, and actors, which can be extracted from the movie metadata.

 

Incorporating contextual information: In this project, we only used user-item interaction data for the recommendation. However, incorporating contextual information such as time, location, and user demographics could improve the quality of recommendations.

 

Exploration of Deep Learning models: Although we compared the performance of two deep learning models in this project, there are many other deep learning models that could be explored, such as Convolutional Neural Networks (CNNs) and Transformer-based models.

 

Explainability and transparency: Recommendation systems can sometimes be seen as a "black box" where users do not know how the system arrives at its recommendations. Future work could focus on developing more transparent and explainable models to increase user trust and confidence in the recommendations.

 

Active Learning: The project evaluated different recommendation algorithms based on the accuracy of their predictions. However, accuracy alone may not always be sufficient for some applications. Future work could investigate active learning techniques to optimize the trade-off between recommendation accuracy and diversity.

 

Privacy and Security: With increasing concerns about data privacy and security, future work could focus on developing more privacy-preserving and secure recommendation systems, such as Federated Learning, Differential Privacy, and Secure Multi-Party Computation.

 

Multi-Objective Optimization: In this project, we mainly focused on optimizing the accuracy of recommendations. However, there are often multiple objectives to consider, such as diversity, novelty, and serendipity. Future work could explore multi-objective optimization techniques to balance these competing objectives within the system.


Contact Information

Intelligent Systems Program, University of Pittsburgh

6127 Sennott Square
210 South Bouquet Street
Pittsburgh, PA 15260

412-253-3710

  • LinkedIn


