
First week-- real-world application of ML!

Summary

Today was my first day working as a Research Intern in the Autonomous Systems team at the Nissan-Renault-Mitsubishi Alliance Innovation Lab! I was introduced to the various team members, scoped out my project, and even made some early progress on it. I also received a lovely tour of the Nissan garage and demo vehicles as well as the Gran Turismo PlayStation setup we had at the office.


Here is a picture of me on my first day with the car the company gave me for the summer!

Project Description

My project for the summer is LiDAR point-cloud classification and object orientation detection. The current approach to object classification in the Nissan perception pipeline is fairly rudimentary: it is largely heuristics-based and tightly coupled to the tracked-object detection system (the working assumption being that objects that move consistently across multiple frames are cars). My model will remove that dependence by classifying objects directly from the point cloud, which lets downstream protocols be employed selectively and more reliably. The model will be deployed in Nissan Autonomous Vehicles starting October '22, which I am absolutely thrilled about!
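I haven't settled on an architecture yet (that's what the literature review is for), but to make the task concrete, here is a minimal PointNet-style sketch of the kind of model I have in mind: a shared per-point MLP, a max-pool into a global feature, and two heads, one for the object class and one for its yaw orientation. Every name and dimension here is a placeholder of my own, not the model that will actually ship.

```python
import torch
import torch.nn as nn

class PointCloudClassifier(nn.Module):
    """Minimal PointNet-style sketch: per-point features -> global
    max-pool -> class logits + yaw angle. Placeholder only."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Shared per-point MLP, implemented as 1x1 convolutions
        # over an input of shape (batch, 3, num_points).
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.class_head = nn.Linear(256, num_classes)
        # Predict (sin, cos) of yaw so the angle wraps cleanly.
        self.yaw_head = nn.Linear(256, 2)

    def forward(self, points: torch.Tensor):
        # points: (batch, 3, num_points) of x, y, z coordinates.
        features = self.point_mlp(points)
        # Max-pooling over points gives an order-invariant global feature.
        global_feat = features.max(dim=2).values
        logits = self.class_head(global_feat)
        sin_cos = self.yaw_head(global_feat)
        yaw = torch.atan2(sin_cos[:, 0], sin_cos[:, 1])
        return logits, yaw

# Example: classify a batch of two clusters of 512 points each.
model = PointCloudClassifier()
logits, yaw = model(torch.randn(2, 3, 512))
```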

I am working most directly with Atsu Kobashi and my manager Chris Ostafew. As I mentioned in my previous post, I chose this internship specifically for its research focus: scoping out the literature to find and deploy an apt model and architecture for a real-world application.


Summer benchmarks

As part of Week 1, I set benchmarks for my project for the summer-- for example, by Week 4 I would ideally like to have a first draft of the model trained, so that I can iterate on and improve its performance. That means I need to complete an initial literature review and set up the model codebase during Weeks 2-3.



Some thoughts on AI in general...

Two real distinctions stood out to me about applying AI/ML in the real world:

1) Data truly is messier than the curated datasets (*cough* ImageNet, COCO, CIFAR *cough*) I was so used to interacting with. I played back numerous ROS bagfiles containing the 3D LiDAR point data (see the playback sketch after this list) and found that, half the time, points trailed off the ends of cars and vegetation; in some timeframes and from certain angles, the clouds did not even resemble those objects.

2) There is a paramount focus on time and optimization. The classification result MUST be produced within the 10 Hz cycle at which the LiDAR processor publishes data (roughly 100 ms per frame) to be useful in the AV pipeline, so performance and optimization trade-offs have to be considered constantly (see the latency check below). I never had to reckon with a hard time limit while training before; in university research the motto always seemed to be "the faster the better," but slower, more time-consuming architectures were often excused in the name of model performance.
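For anyone curious what playing back a bagfile looks like in code, here is a minimal sketch using the ROS 1 Python API. The bag path and the point-cloud topic name are placeholders of mine, not the real ones from our pipeline.

```python
# Minimal sketch of reading LiDAR points out of a ROS 1 bagfile.
# "drive_log.bag" and "/lidar/points" are placeholder names.
import rosbag
import sensor_msgs.point_cloud2 as pc2

with rosbag.Bag("drive_log.bag") as bag:
    for topic, msg, stamp in bag.read_messages(topics=["/lidar/points"]):
        # Each message is a sensor_msgs/PointCloud2; unpack x, y, z.
        points = list(pc2.read_points(msg, field_names=("x", "y", "z"),
                                      skip_nans=True))
        print(f"{stamp.to_sec():.2f}s: {len(points)} points")
```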
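And here is a toy illustration of the latency budget that now hangs over everything: at 10 Hz, the whole classification step gets roughly 100 ms end to end. The classify_frame function below is just a stand-in for whatever classifier ends up in the pipeline.

```python
import time

FRAME_BUDGET_S = 1.0 / 10.0  # 10 Hz LiDAR cycle -> ~100 ms per frame

def classify_frame(points):
    # Stand-in for real inference; sleeps to simulate compute time.
    time.sleep(0.03)
    return "car"

start = time.perf_counter()
label = classify_frame(points=None)
elapsed = time.perf_counter() - start
if elapsed > FRAME_BUDGET_S:
    print(f"Over budget: {elapsed * 1e3:.1f} ms > {FRAME_BUDGET_S * 1e3:.0f} ms")
else:
    print(f"Within budget: {elapsed * 1e3:.1f} ms ({label})")
```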



New skills to learn!

This will be my first time working with ROS and bagfiles, which are ubiquitous in robotics and especially in the AV field. I will also be working in a company environment for the first time (applying my skills to real-world problems), which is very exciting, especially because my model will be deployed at the end of the internship.



For next time

Next time, I will give a quick overview of the literature review I performed and hopefully have some progress to share on building the model codebase!
