I am a computer vision consultant and an electrical engineer by training, with over 16 years of research and development experience in industry and academia. I completed my Doctor of Engineering in 2015 with a focus on computer vision applied to agriculture. My expertise lies mainly in applied research in computer vision, image processing, and machine learning for robotics, precision agriculture, 3D scanning, and video surveillance applications using different imaging sensors. I am a co-inventor of the 3D scanning mechanism used in the da Vinci Color, the world's first FFF full-color 3D printer, by XYZPrinting Inc.
I have been working as a faculty member at the Department of Mechatronics Engineering, NUST College of Electrical and Mechanical Engineering, since 2016. I have won both national and international grants; some of them are already completed and closed, and a couple will be closed by June 2020. I am a Co-PI at the Robots Design and Development Lab, National Center for Robotics and Automation. I won an HEC Technology Development Fund grant named Pak Zar Zameen, which evolved into a startup offering precision agriculture services, incubated at the Telenor Velocity program and hosted at the NUST Incubation Center. Over the last four years, I have published 6 journal papers, presented 9 conference papers, been granted 1 US patent (with the same patent published in China and Europe), and obtained two local design patents. In total, I have 110 citations with an h-index of 6.
I graduated as an electrical engineer with a specialization in electronics and communication in 2003. My professional career started as a quality control engineer at a government research and development organization, where I broadened my horizons as an embedded systems developer and engineer. During this period, I completed my master's in computer engineering, which strengthened my research skills and developed my interest in machine vision and machine learning. Prior to my move to AIT for my doctorate, I worked as teaching faculty for two years at Air University. My Doctor of Engineering thesis explored automated analysis of visual cues for real-world monitoring tasks. The mentorship of my supervisors and co-supervisors taught me critical thinking and analysis, and to focus on solving real-world problems relevant to developing countries. My research emphasis was on precision agriculture, exploring automated analysis of visual and depth cues for crop monitoring, with tangents toward fire detection and moving-target tracking. After my doctorate, I was hired by a 3D printer and scanner manufacturing company as a research and development consultant. We had a long working relationship until Feb. 2020, with most of my time spent working for them remotely. My work was mainly on developing products based on 3D reconstruction algorithms for RGB and depth cameras. We launched three commercial products: the XYZPrinting Full Colour Handheld Scanner, XYZPhoto3D, and the XYZPrinting Colour Scanner for the da Vinci Color printer. One of them was a purely software application for mobile phones based on 3D photogrammetry. We filed US and EU patents on an improved 3D scanning method for a built-in 3D colour scanner, which were granted in Dec. 2019.
My career at the National University of Sciences and Technology as research and teaching faculty started in 2016. My research emphasis here is mostly on precision agriculture, multi-spectral sensing of crop fields, and 3D printing and scanning applications. In 2016, I won a 0.5 million startup research grant as PI from the Higher Education Commission for a 3D-printed robotic arm for fruit harvesting. Later, I won another 0.5 million research grant as Co-PI for an aerial spraying drone. I worked with the Aga Khan Foundation and AIT Solutions on developing a 3D model of Baltit Fort using photogrammetry; my task was mainly to gather data and generate a high-resolution 3D model using 3D reconstruction techniques. Most of the capstone projects that I offered to my final-year students were either partially funded by industry or carried out with an international collaborator. I contributed to analysing visual features for leaf detection. We presented a new five-step algorithm (comprising image pre-processing, segmentation, feature extraction, dimensionality reduction, and classification steps) for recognizing plant type from leaf images. The algorithm was evaluated on the publicly available standard dataset 'Flavia' of 1600 leaf images and on a self-collected dataset of 625 leaf images. Different classifiers were tested with the proposed algorithm, such as k-nearest neighbour (KNN), decision tree, naïve Bayesian, and multi-class support vector machines (SVM); the results reveal that the proposed algorithm can attain plant recognition accuracy of up to 98.75% on Flavia and 97.25% on the self-collected dataset when using KNN as the classifier, which makes it a useful approach for the identification task.
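The five-step pipeline above can be sketched in code. This is a minimal illustration, not the published implementation: the segmentation rule, the particular shape features, and the PCA/KNN parameters below are all assumptions made for the sake of a self-contained example.

```python
# Illustrative sketch of a five-step leaf-recognition pipeline:
# pre-processing -> segmentation -> feature extraction -> PCA -> KNN.
# Feature choices and parameters are assumptions, not the published setup.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def extract_features(gray_leaf):
    """Steps 1-3: normalise the image, segment the leaf with a crude
    intensity threshold, and extract simple shape/texture features."""
    img = gray_leaf.astype(float) / 255.0      # step 1: pre-processing
    mask = img < img.mean()                    # step 2: leaf darker than background
    area = int(mask.sum())
    if area == 0:
        return np.zeros(4)
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return np.array([                          # step 3: feature vector
        area / img.size,                       # relative leaf area
        width / height,                        # aspect ratio
        img[mask].mean(),                      # mean intensity inside the leaf
        img[mask].std(),                       # intensity spread (texture proxy)
    ])

def build_classifier(n_components=3, k=5):
    """Steps 4-5: dimensionality reduction (PCA) + KNN classification."""
    return Pipeline([
        ("scale", StandardScaler()),
        ("pca", PCA(n_components=n_components)),
        ("knn", KNeighborsClassifier(n_neighbors=k)),
    ])
```

In the actual work, the feature-extraction step would use richer morphological and venation descriptors computed on properly segmented leaf images; the pipeline structure, however, follows the same five stages.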
I explored the use of near-infrared (NIR) spectroscopy for fruit maturity estimation in collaboration with Central Queensland University, Australia, and Agriculture University Faisalabad. The project was funded by the Pakistan Agriculture Research Council for a period of 2 years with an amount of 5.0 million. Our task is to analyze NIR spectroscopy for maturity estimation of mango varieties and to develop two indigenous devices for maturity estimation, one using a microarray and another using special NIR LEDs. We benchmarked the NIR spectra of mature local mango varieties for dry matter (DM) and sugar content (Brix) against destructive testing and developed a mathematical model to predict DM and Brix for local varieties. The locally developed devices will be tested and benchmarked for DM and Brix in the upcoming mango season. We are also exploring the use of such a non-destructive mechanism for table grapes grown in the Potohar region.
I explored the use of multispectral imagery from Sentinel-2 and drones to develop a crop health advisory for straw crops. The project is funded by the HEC Technology Development Fund with a grant of 11.1 million. We developed a mobile application and an API that take the crop sowing date as input from the farmers and give an advisory back to them. The current advisory is based on NDVI maps of the crop computed from Sentinel-2 satellite imagery. The NDVI imagery is computed by a third party in the cloud and sent to our server, which uses the NDVI values and the information from the farmer to generate the advisory for different phases of the crops. The use of multi-spectral imagery from drones is also being explored for high-resolution crop health maps. For now, we have only benchmarked the NDVI values against crop health for wheat, rice, and sugarcane. In this project, we also explored aerial spraying using drones for sugarcane; the results show better control of whitefly on sugarcane compared to the manual method.
I explored the development of an algorithm for a selfie drone. The project was part of my post-PhD research at National Taiwan University of Science and Technology from July 2018 to Oct. 2018. We developed a software system for a selfie drone that eliminates the need for manual maneuvering of the drone to the required position. After a careful review of the existing options, we chose the DJI Phantom 4, which by default is controlled through an RC transmitter. The RC transmitter is connected to a smartphone running our Android software application. The user selects or provides a template for the selfie, and the drone takes off automatically. The software locks the home position of the drone, detects the human face, and localizes the face and drone in a cluttered environment. It computes the position vector of the drone camera for the desired selfie, flies the drone to that position, and captures the selfie. Afterwards, the drone keeps hovering at the same location, waiting for the next template or to fly back to the home position. We use the DJI SDK to control the drone and support vector regression to calculate the position vector for capturing a selfie image resembling the template given by the user. We quantitatively evaluated the regressor in a simulation developed using OpenGL and on actual images, where it achieved an accuracy of 80% on real images. We then qualitatively evaluated the algorithm running on the mobile platform, and the images captured by the drone look like the desired template images.
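The regression step can be sketched as follows. The exact input features and pose parameterisation used in the project are not stated here, so the face-bounding-box features and 3-DOF relative position target below are illustrative assumptions.

```python
# Hedged sketch of the selfie-drone regression step: map face geometry
# observed in a template image to a desired relative camera position.
# Feature and target choices are assumptions for illustration.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

def train_pose_regressor(face_boxes, camera_poses):
    """face_boxes: (n, 4) normalised face boxes [cx, cy, w, h].
    camera_poses: (n, 3) relative camera positions [dx, dy, dz] that
    produced a selfie matching each box. Returns a fitted regressor
    (one SVR per output dimension)."""
    reg = MultiOutputRegressor(SVR(kernel="rbf", C=10.0))
    reg.fit(face_boxes, camera_poses)
    return reg

def desired_pose(reg, template_box):
    """Predict the camera position vector for a user-selected template."""
    return reg.predict(np.asarray(template_box, float).reshape(1, -1))[0]
```

The intuition the regressor must capture is simple: a small face box in the template implies the camera should fly further away, a large one implies a closer position; training pairs of (observed box, known relative pose) let the SVR learn this mapping without an explicit camera model.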
As a responsible researcher during the COVID-19 lockdown period, I am collaborating on a project with a former colleague working at the Czech Technical University in Prague. The core idea of the project is to minimize people's potential exposure to infection by advising them to schedule their necessary trips based on the forecasted density of others at the places they intend to visit. The project will combine principles known from medical science, artificial intelligence, and chrono-robotics. We will combine spatio-temporal models (www.fremen.uk) that can predict future densities of people at various locations with epidemiologic models that can estimate transmission and exposure risks. The resulting method will be able to predict the future risk of virus exposure at different locations and times, allowing the public to avoid those locations. This will reduce the risk of individual exposure and, subsequently, the spread of the virus.
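The combination described above can be illustrated with a toy sketch: a single-harmonic periodic model (the simplest form of a frequency-map-style spatio-temporal model) predicts future people density at a place, and an assumed exposure proxy converts density into a relative risk score. Both the model order and the risk function are placeholders, not the project's actual formulation.

```python
# Toy illustration of the project's idea: periodic density prediction
# plus a density-to-risk conversion. All parameters are assumptions.
import numpy as np

def fit_periodic_density(timestamps, counts, period=24 * 3600):
    """Fit mean + one cosine/sine harmonic of the given period (a
    minimal frequency-map model) to historical visitor counts."""
    w = 2 * np.pi / period
    A = np.column_stack([np.ones_like(timestamps, dtype=float),
                         np.cos(w * timestamps), np.sin(w * timestamps)])
    coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
    return coef, w

def predict_density(model, t):
    """Predicted people density at a future time t (seconds)."""
    coef, w = model
    return coef[0] + coef[1] * np.cos(w * t) + coef[2] * np.sin(w * t)

def exposure_risk(density, contact_rate=0.1):
    """Assumed exposure proxy: risk saturates as crowd density grows."""
    return 1.0 - np.exp(-contact_rate * max(float(density), 0.0))
```

In the full project, the density model would carry many harmonics per location and the risk term would come from a proper epidemiologic model, but the pipeline shape is the same: predict density at (place, time), then score and rank candidate trip times by exposure risk.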