I'm a final-year ECE Ph.D. candidate at UC Santa Cruz (UCSC). I did my bachelor's and master's at the University of Tabriz in my beautiful hometown, Tabriz. I joined the Applied Optics group in the Jack Baskin School of Engineering (JBSOE) under the supervision of Prof. Holger Schmidt.

I have been lucky to study at UCSC, with its spectacular nature. When I'm not doing research, I enjoy being outdoors and shooting photos. Check out my small gallery section below.

There is a nice irony in working with lasers and light in the dark to find new things!

# Programming

• Python
• MATLAB
• Verilog/VHDL/HLS
• C/C++
• C#
• Bash
• Lisp

# Python

• NumPy
• SciPy
• pandas
• Matplotlib
• TensorFlow
• scikit-learn
• scikit-image
• HoloViews
• socket
• concurrent.futures
• multiprocessing

# Simulation

• FIMMWAVE/FIMMPROP
• Code V
• ANSYS
• COMSOL

# CAD

• Altium Designer
• Eagle
• Inventor
• Fusion 360

# Embedded HW

• Xilinx FPGA
• Raspberry Pi
• Arduino
• PIC
• AXI-4
• Ethernet (TCP/IP)
• SPI/I2C

# Image Processing

• Andor Zyla sCMOS
• OpenCV/CV2
• PIL
• Scikit-Image
• ImageJ

# IDE

• VS Code
• Jupyter NB/Lab
• Visual Studio
• Eclipse
• MPLAB
• Spyder

# GUI/Instrumentation

• LabVIEW
• PySide/PyQt
• DearPyGUI
• Tk/Tkinter
• Plotly Dash
• MATLAB App Designer
• VISA

# Graphics

• Inkscape
• GIMP
• Blender

# Optics/Fabrication

• Si-photonic chip characterization
• Optical setup design and build
• PDMS optofluidic chip fab.
• 3D printing
• PCB design and assembly

# Others

• Git/GitHub
• Linux/macOS
• MS Windows
• HTML/CSS
• PyPI
• LaTeX
• Teamwork
• Brainstorming

# ML & AI-Accelerated Inferencing (ongoing)

In this project, I developed a deep-learning model to improve fluorescence event detection accuracy. I use the TensorFlow library in Python to design, compile, and train a neural-network model for event classification. The trained model is then transferred to an Edge TPU accelerator (Google Coral Dev Board) for real-time inferencing. I built the quantization-aware model from scratch with the considerations required for compatibility with the Coral Dev Board. I also developed a pipeline that automates dataset preparation for the training step, and a Plotly Dash dashboard for real-time monitoring of results. To achieve real-time performance, I programmed an FPGA to bin photon events and transfer them via Ethernet to the host machine.
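As an illustrative sketch (not the FPGA firmware itself, and the function name and parameters are mine), the binning step amounts to counting photon arrival timestamps into fixed-width time bins:

```python
def bin_photon_events(timestamps, bin_width):
    """Count photon arrival timestamps (in seconds) into fixed-width
    time bins; returns a list of counts, one per bin."""
    if not timestamps:
        return []
    n_bins = int(max(timestamps) // bin_width) + 1
    counts = [0] * n_bins
    for t in timestamps:
        counts[int(t // bin_width)] += 1
    return counts
```

The binned trace is what the downstream event detector and neural network consume.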

I've been working on machine-learning models for image processing in a couple of projects. I use convolutional neural networks (CNNs) to extract coarse and fine features from input images and map them into another space for regression and/or classification. These are ongoing projects, and I will share more details in the near future.

# PCWA-Wavelet Event Detector

Optofluidic chips are developed to detect individual targets flowing within a fluidic medium; the integration of photonics and microfluidics makes optofluidics well suited for single-molecule detection. In this research, I developed a novel event detection algorithm based on the continuous wavelet transform (CWT), a powerful multiscale data analysis tool. A custom-designed mother wavelet is applied to the raw binned input signal from a single-photon-counting module (SPCM) to improve detection rate and accuracy. The algorithm is highly parallel and runs much faster than previous CWT peak-finder algorithms, fast enough for real-time, inline event detection and monitoring [ref].
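To illustrate the general idea (a simplified NumPy sketch, not the PCWA implementation; the standard Ricker wavelet stands in for the custom mother wavelet), the signal is convolved with the wavelet at several scales, and samples whose maximum response across scales is a local maximum above a threshold are flagged as events:

```python
import numpy as np

def ricker(points, a):
    """Unnormalized Ricker ("Mexican hat") wavelet sampled on `points` samples."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def cwt_detect(signal, scales, threshold):
    """Flag local maxima of the max-over-scales CWT response above `threshold`."""
    coeffs = np.array([
        np.convolve(signal, ricker(10 * s + 1, s), mode="same")
        for s in scales
    ])
    response = coeffs.max(axis=0)
    return [i for i in range(1, len(signal) - 1)
            if response[i] > threshold
            and response[i] >= response[i - 1]
            and response[i] >= response[i + 1]]
```

Matching the wavelet scale to the expected event width is what makes the multiscale response robust against shot noise in the binned SPCM trace.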

The PCWA algorithm has been used in various event detection applications in our lab. I have implemented it in a more complex platform where real-time detection of events and concentration calculation are necessary. A GUI (built with DearPyGUI) takes inputs from the user and visualizes real-time insight into the running experiment.

# Photonic WG Simulation

I have taken Photon Design training sessions to learn how to model and simulate different types of waveguides. I use FIMMWAVE for mode calculation and building the mode list, and FIMMPROP to calculate mode overlaps and propagation along the waveguide. This way we calculate coupling efficiency, loss, and mode confinement at the desired location. Fig. 4 shows an example of design optimization. Here, I wrote a Python script to automate multiple variable sweeps and store the S-matrix outputs in a log file. We can then pick the sweet spot with tolerance considerations. The model contains a tapered/crevice section that was discovered in the actual fabricated devices by focused ion beam (FIB) milling.
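The sweep script follows a simple pattern; here is a minimal sketch of the idea, with a hypothetical `simulate()` standing in for the FIMMPROP run (the real script drives FIMMPROP through Photon Design's scripting interface, and the parameter names here are illustrative):

```python
import itertools

def simulate(width, gap):
    """Hypothetical stand-in for a FIMMPROP run; returns a figure of merit
    (e.g., coupling efficiency) for one geometry."""
    return 1.0 - 0.1 * abs(width - 4.0) - abs(gap - 0.3)

def sweep(widths, gaps):
    """Evaluate every (width, gap) combination; each row is one log entry."""
    return [(w, g, simulate(w, g)) for w, g in itertools.product(widths, gaps)]

def best(rows):
    """Pick the sweet spot: the row with the highest figure of merit."""
    return max(rows, key=lambda r: r[2])
```

In practice the rows are written to a log file and the sweet spot is chosen with fabrication-tolerance margins in mind, not just the single best point.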

Fig. 5 shows another example, in which I simulated a multi-mode interference (MMI) waveguide designed to achieve a high-contrast multi-spot pattern for multiplexed bio-sensing applications. The interference pattern is calculated using the FIMMPROP engine.
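For reference, MMI self-imaging is governed by the beat length of the two lowest-order modes; in the standard textbook approximation (Soldano and Pennings), and not specific to this design,

$L_\pi = \dfrac{\pi}{\beta_0 - \beta_1} \approx \dfrac{4 n_r W_e^2}{3 \lambda_0}$

where $n_r$ is the effective ridge index, $W_e$ the effective MMI width, and $\lambda_0$ the free-space wavelength. Multi-spot images form at fractions of $L_\pi$, which is why the spot pattern is so sensitive to wavelength and MMI width.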

# MMI Mode Imaging

In this project, I characterized various single-layer SiO2 low-index waveguides fabricated on an anti-resonant reflecting optical waveguide (ARROW) layer to measure the quality of the multi-spot interference pattern. To help with batch characterization, I built a mode-imaging setup with C# GUI software, where live top-down and side-facet images from two CMOS cameras are displayed. The software also communicates with the NKT SuperK EXTREME supercontinuum white-light laser and SuperK SELECT to tune and scan wavelengths.

To analyze the recorded mode images, I developed an image processing program (Mode-Analyzer) that converts multi-spot facet images into a 1D array and then extracts multiple features such as full width at half maximum (FWHM) and peak-to-valley (P2V). The program is written in Python, with the GUI developed in Tkinter.
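As a sketch of the kind of feature Mode-Analyzer extracts (simplified, with my own function names; it assumes a single-peak profile that falls below half-maximum on both sides), the FWHM of a 1D profile can be estimated by interpolating the half-maximum crossings on either side of the peak:

```python
def half_crossing(profile, i_peak, half, step):
    """Walk from the peak in direction `step` (+1 or -1) until the profile
    drops below `half`, then linearly interpolate the crossing position."""
    i = i_peak
    while 0 < i < len(profile) - 1 and profile[i + step] >= half:
        i += step
    j = i + step  # first sample below the half-maximum level
    if profile[i] == profile[j]:
        return float(i)
    return i + step * (profile[i] - half) / (profile[i] - profile[j])

def fwhm(profile):
    """FWHM (in samples) of a single-peak 1D profile."""
    peak = max(profile)
    i_peak = profile.index(peak)
    half = peak / 2.0
    return (half_crossing(profile, i_peak, half, +1)
            - half_crossing(profile, i_peak, half, -1))
```

Converting the result from samples to micrometers then only requires the camera's pixel pitch and the imaging magnification.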

Communication with the NKT white-light source makes it possible to scan through wavelengths and find those at which a clear MMI spot pattern forms. The scan is plotted in real time as follows:
1. the SuperK SELECT is scanned from $\lambda_0$ to $\lambda_N$ in $N$ steps with a 200 ms delay
2. a frame is captured from the facet CMOS camera
3. inside a defined box (ROI), the vertical integral of the pixel values is calculated, reducing the frame to a 1D array ($L$)
4. the 1D array calculated at $\lambda_i$ is assigned to row $i$ of the 2D scan array:

```python
for i in range(N):
    # tune the SuperK SELECT to the i-th wavelength and wait ~200 ms
    frame = camera.grabframe()
    # integrate pixel values inside the ROI down to a 1D profile L
    L = sum(frame[roi[0]:roi[1], roi[2]:roi[3]], axis=1)
    scan[i, :] = L
    figure.draw()
```

# PDMS Fabrication

Polydimethylsiloxane (PDMS) is a biocompatible, flexible, transparent elastomer that has been used in microfluidics and bio-sensors for decades. Its desirable properties for microfluidic applications, combined with control over optical properties during fabrication, make PDMS a favorable material for optofluidics. I have gained skills and experience fabricating optofluidic devices using the in-house cleanroom facility at BSOE. I use AutoCAD for photomask design, and to ease polyline verification as a design rule check (DRC) step for commercial mask-writer instruments, I developed a Lisp script called Poly Hatch.
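Poly Hatch itself is an AutoLISP script running inside AutoCAD; purely as an illustration of the kind of check such a DRC pass performs (sketched here in Python, with hypothetical rules and names), a mask-writer typically requires every polyline to be closed and free of degenerate segments:

```python
def drc_check(polyline):
    """Return rule violations for one polyline given as a list of (x, y)
    vertices. Illustrative checks only: closure and zero-length segments."""
    errors = []
    if polyline[0] != polyline[-1]:
        errors.append("polyline is not closed")
    for a, b in zip(polyline, polyline[1:]):
        if a == b:
            errors.append(f"zero-length segment at {a}")
    return errors
```

Flagging these issues at design time avoids rejected jobs and wasted writes on the commercial mask writer.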

# 3D Hydrodynamic Focusing

Hydrodynamic focusing (HDF) in fluidics helps confine the sample stream by introducing sheath streams surrounding it. With the help of sacrificial layers, 3D-HDF integration was achieved within the ARROW optofluidic chip.

The regular way of connecting outlet reservoirs of ARROW chips to the vacuum line (gluing brass cylinders) becomes challenging when dealing with multiple outlet channels (sample channel outlet + buffer channel outlets). To solve this, I designed and made a 3D-printed chip mount with the following goals in mind:
• should hold the chip secure during the experiment
• should seal both inlet and outlet channels to prevent leakage and bubble formation
• should have minimal footprint and provide enough room for the fiber and objective lenses to couple to the chip
• should be adjustable and tolerate possible chip cleaving errors

# 3D Atomic Deposition

This simulation was done for the EE-216 class (Nanomaterials and Nanometer-scale Devices, with Prof. Nobuhiko P. Kobayashi), where we had to simulate the deposition of atoms on a predefined surface. We (Yucheng Li, Md Nafiz Amin, and I) used the Lennard-Jones (L-J) potential model to calculate the attractive and repulsive forces each atom feels when introduced into the sample. The visualization and the tensor calculations are done in PyOpenGL and NumPy, respectively. Further information is available in my GitHub repository.
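The pairwise interaction has the standard form $V(r) = 4\varepsilon[(\sigma/r)^{12} - (\sigma/r)^6]$; a minimal scalar sketch of the potential and the resulting radial force follows (our actual code vectorizes this over all atom pairs with NumPy):

```python
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential V(r) = 4*eps*((s/r)**12 - (s/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Radial force F(r) = -dV/dr; positive means repulsive."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r
```

The force vanishes at the equilibrium separation $r = 2^{1/6}\sigma$, where the potential reaches its minimum of $-\varepsilon$; atoms closer than that are pushed apart, and more distant ones are attracted.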
