I am currently a Data Scientist at Nokia. I completed my PhD at the University of Warwick in the Signal and Information Processing (SIP) Lab, supervised by Prof. Tanaya Guha and Prof. Victor Sanchez. She is the best supervisor I have ever had, and she showed me the magnificent and fruitful possibilities of PhD life. My research applies graph convolutional networks to machine learning tasks (mainly multi-modal) across applications such as affect learning. During my PhD, I joined DeepMirror and the Intel AI Lab as an intern. At DeepMirror, I applied graph-based models to life-science data such as chemical, protein, and RNA data, to name a few. For more info, you can check the projects here. I have also collaborated with Google Research and the Intel AI Lab.
Prior to Warwick, I completed my MSc and BSc in the ECE Department at the University of Tehran, Iran. During my master's, I worked on deep learning models for stock price prediction; throughout that degree, I also implemented and designed many deep learning models for other tasks.
I received my diploma in the Physics and Mathematics discipline from Shahid Beheshti, a school run by NODET (National Organization for Development of Exceptional Talents).
I was born in the beautiful city of Shahrekord, Iran (you can see some photos here). I spent the first 25 years of my life in Iran, which gave me great skills and memories.
Nokia, Reading, UK
DeepMirror, Cambridge, UK
Intel AI Lab, San Diego, California
To check out the code, please visit here.
Emotion analysis through facial expression recognition has become ubiquitous with the widespread use of mobile and web-based interactive technologies such as conversational agents, mental health monitoring and remote learning. A major criticism of technologies using facial expression recognition is that they jeopardise users' privacy by leaking identity-related information. In order to employ emotion recognition models, sensitive facial information revealing identity is often transmitted from users' devices to databases and cloud servers. This raises issues of unintended and unethical use of identity information without users' consent or knowledge. Furthermore, high variability in facial shape and expression production across age, gender and ethnicity may affect how efficiently a network recognizes emotion, and can therefore introduce bias in the results. We propose to address this challenge by separating identity information from facial expressions so as to hide an individual's identity while preserving their expressions for accurate emotion analysis.
We propose a generalized approach to emotion recognition that can adapt across modalities by modeling dynamic data as structured graphs. To alleviate the problem of optimal graph construction, we cast this as a joint graph learning and classification task. To this end, we present the Learnable Graph Inception Network (L-GrIN) that jointly learns to recognize emotion and to identify the underlying graph structure in the dynamic data.
Epilepsy is a common neurological disorder, characterized by abnormal firing of neurons. Magnetic Resonance Imaging (MRI) techniques can be integrated with machine learning methods to diagnose epileptic patients noninvasively. In this study, we use structural MRI data from 17 subjects (10 epileptic patients and 7 normal control subjects) and segment brain tissues using a Gram-Schmidt orthogonalization method and a unified tissue segmentation approach. We then compute first-order statistical and volumetric gray-level co-occurrence matrix (GLCM) texture features and train SVM classifiers for epilepsy diagnosis based on the features of the whole brain or those of the hippocampus. We achieve an accuracy of 94% using the unified segmentation method and the whole-brain analysis approach.
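As a rough illustration of the GLCM texture features mentioned above, here is a minimal NumPy sketch that builds a co-occurrence matrix for one pixel offset and derives three standard statistics (contrast, energy, homogeneity). The patch, number of gray levels, and offset are illustrative, not the study's actual settings; the real pipeline would extract such features from segmented brain tissue and pass them to an SVM.

```python
import numpy as np

def glcm(image, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset (dy, dx)."""
    dy, dx = offset
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()  # normalize to joint probabilities

def texture_features(p):
    """Classic GLCM texture statistics: contrast, energy, homogeneity."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

# toy 4-level patch standing in for a segmented brain region
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]], dtype=int)
feats = texture_features(glcm(patch, levels=4))
```

In practice one would average such features over several offsets and directions before classification.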
Analyzing emotion from visual cues, such as facial expressions is critical for intelligent human-centric systems. The majority of existing work on facial emotion recognition uses raw images or videos as inputs. Different from previous works, we adopt a graph approach to video-based facial emotion recognition to develop an accurate, scalable and compact architecture. We cast the problem as a joint graph learning and classification task. To this end, we propose a novel graph convolution network that jointly learns to recognize emotion and to identify the underlying graph structure.
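To give a flavour of the joint graph-learning idea, the NumPy sketch below runs one forward pass of a graph convolution where the adjacency matrix is itself a free parameter (here squashed through a sigmoid and symmetrically normalized). All sizes and names are illustrative; the actual model also uses inception-style multi-scale convolutions and learns the adjacency jointly with the classifier via backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy frame-level features: N graph nodes (e.g. facial landmarks), d dims each
N, d, n_classes = 5, 8, 3
X = rng.standard_normal((N, d))

# learnable adjacency: a free parameter matrix squashed to [0, 1];
# in the real model this is optimized jointly with the classifier
A_param = rng.standard_normal((N, N))
A = 1.0 / (1.0 + np.exp(-(A_param + A_param.T) / 2))  # symmetric, in [0, 1]
A_hat = A + np.eye(N)                                  # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt               # symmetric normalization

# one graph-convolution layer, global pooling, and a linear head
W = rng.standard_normal((d, 16))
H = np.maximum(A_norm @ X @ W, 0.0)   # ReLU(A_norm X W)
g = H.mean(axis=0)                    # global average pooling over nodes
W_out = rng.standard_normal((16, n_classes))
logits = g @ W_out                    # one score per emotion class
```

Because the adjacency is a differentiable function of `A_param`, gradients from the classification loss can shape the graph structure itself, which is the core of the joint learning formulation.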
Generative models for text have substantially contributed to tasks like machine translation and language modeling, using maximum likelihood estimation (MLE). However, for creative text generation, where multiple outputs are possible and originality and uniqueness are encouraged, MLE falls short: methods optimized for MLE lead to outputs that can be generic, repetitive and incoherent. In this work, we use a Generative Adversarial Network framework to alleviate this problem. In this project, I trained the model on the works of several Persian poets and let the model generate new poems.
In this project we implemented a simple deep learning network that extracts features in the spectral domain (each time series is converted to the spectral domain) based on this paper. The processed time series is then fed into a neural network for prediction. We also proposed a new loss function for the prediction task. For more information, read the detailed report.
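The spectral-domain conversion can be sketched as follows: take the FFT of each (mean-centred) series window and use the magnitudes of the leading coefficients as the feature vector fed to the network. This is a minimal illustration, not the paper's exact pipeline; the window length, number of coefficients, and the toy sinusoid are assumptions for the demo.

```python
import numpy as np

def spectral_features(series, k=8):
    """Magnitudes of the first k non-DC FFT coefficients as features."""
    spectrum = np.fft.rfft(series - series.mean())
    return np.abs(spectrum[1:k + 1])

# toy window: a sinusoid at 4 cycles standing in for a price series window
t = np.arange(64)
series = np.sin(2 * np.pi * 4 * t / 64)
feats = spectral_features(series, k=8)
# the dominant spectral component sits at FFT bin 4,
# i.e. index 3 of the feature vector (bin 0, the DC term, is dropped)
```

The resulting fixed-length feature vectors are what a small feedforward network would then consume for prediction.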
IoT, or the Internet of Things, is a scenario in which devices around you can send data over a network without your direct involvement. When you use tank altitude control for checking liquid levels, the device measures the level of your liquid using an ultrasound module and sends the data to our website database using a WiFi module. The website was designed using MATLAB. You can find out how this project works here.
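The level measurement itself reduces to simple echo timing: the ultrasound module emits a pulse, times the round trip to the liquid surface, and the level follows from the speed of sound. The sketch below shows that arithmetic; the tank height and echo time are illustrative values, not the project's actual configuration.

```python
# Ultrasound level sensing: distance to the surface is
# speed_of_sound * round_trip_time / 2, and the liquid level is the
# tank height minus that distance.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def liquid_level(echo_time_s, tank_height_m):
    """Liquid level (m) from the echo round-trip time of a sensor at the tank top."""
    distance_to_surface = SPEED_OF_SOUND * echo_time_s / 2.0
    return tank_height_m - distance_to_surface

# e.g. a 2 ms round trip in a 1.5 m tank leaves about 1.157 m of liquid
level = liquid_level(0.002, 1.5)
```

The device would then push each computed level over WiFi to the website database.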