Power Your ML and AI Efforts with Data Transformation – Thought Leaders

By globalresearchsyndicate | May 28, 2020 | Data Analysis

Julien Rebetez is the Lead Software & Machine Learning Engineer at Picterra. Picterra provides a geospatial, cloud-based platform specially designed for training deep learning-based detectors quickly and securely.

Without a single line of code, and with only a few human-made annotations, Picterra's users build and deploy unique, actionable, ready-to-use deep learning models.

It automates the analysis of satellite and aerial imagery, enabling users to identify objects and patterns.

What is it that attracted you to machine learning and AI?

I started programming because I wanted to make video games, and got interested in computer graphics at first. This led me to computer vision, which is kind of the reverse process: instead of having the computer create a fake environment, you have it perceive the real environment. During my studies, I took some machine learning courses and got interested in the computer vision angle of it. I think what's interesting about ML is that it sits at the intersection of software engineering, algorithms, and math, and it still feels kind of magical when it works.


You’ve been working on using machine learning to analyze satellite imagery for many years now. What was your first project?

My first exposure to satellite imagery was the Terra-i project (to detect deforestation), which I worked on during my studies. I was amazed at the amount of freely available satellite data produced by the various space agencies (NASA, ESA, etc.). You can get regular images of the planet for free every day or so, and this is a great resource for many scientific applications.


Could you share more details regarding the “Terra-i” project?

The Terra-i project (http://terra-i.org/terra-i.html) was started by Professor Andres Perez-Uribe from HEIG-VD (Switzerland) and is now led by Louis Reymondin from CIAT (Colombia). The idea of the project is to detect deforestation using freely available satellite images. At the time, we worked with MODIS imagery (250 m pixel resolution) because it provided uniform and predictable coverage, both spatially and temporally. We would get a measurement for each pixel every few days, and from this time series of measurements you can try to detect anomalies, or novelties as we sometimes call them in ML.

This project was very interesting because the amount of data was a challenge at the time, and there was also some software engineering involved to make it work across multiple computers and so on. On the ML side, it used a Bayesian neural network (not very deep at the time 🙂) to predict what the time series of a pixel should look like. If the measurement didn't match the prediction, we would flag an anomaly.
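As an illustration (this is not Terra-i's actual Bayesian model, and the numbers are made up), the flag-it-when-prediction-and-measurement-disagree idea can be sketched in a few lines of NumPy:

```python
import numpy as np

def detect_novelties(measurements, predictions, threshold=3.0):
    """Flag time steps where the observation deviates too far from the
    model's prediction (a stand-in for the Bayesian network's output)."""
    residuals = measurements - predictions
    z = (residuals - residuals.mean()) / residuals.std()  # threshold in "sigmas"
    return np.abs(z) > threshold

# Toy example: a vegetation-index-like series with one abrupt drop.
t = np.arange(46)                               # ~a year of 8-day composites
expected = 0.6 + 0.2 * np.sin(2 * np.pi * t / 46)
observed = expected + np.random.normal(0, 0.02, t.size)
observed[30] -= 0.4                             # simulated deforestation event
print(np.where(detect_novelties(observed, expected))[0])   # -> [30]
```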

As part of this project, I also worked on cloud removal. We took a traditional signal processing approach: you have a time series of measurements, and some of them will be completely off because of a cloud. We used a Fourier-based approach (HANTS) to clean the time series before detecting novelties in it. One of the difficulties is that if we cleaned too aggressively, we would also remove the novelties themselves, so it took quite a few experiments to find the right parameters.
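The general idea behind that kind of Fourier cleaning can be sketched as an iterative low-order fit that rejects points falling well below the curve (clouds darken the signal). This is a simplified illustration, not the HANTS implementation the project used, and the parameters mirror the trade-off he mentions:

```python
import numpy as np

def fourier_clean(series, n_harmonics=2, n_iter=3, tol=2.0):
    """Fit a low-order Fourier series, iteratively drop points that fall far
    below the fit (cloudy observations), and refit on the survivors.
    Too large a `tol` keeps clouds; too small also removes real novelties."""
    t = np.arange(series.size, dtype=float)
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * k * t / t.size),
                 np.cos(2 * np.pi * k * t / t.size)]
    A = np.column_stack(cols)                     # design matrix
    keep = np.ones(series.size, dtype=bool)
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A[keep], series[keep], rcond=None)
        fit = A @ coef
        resid = series - fit
        keep &= resid > -tol * resid[keep].std()  # reject low outliers only
    return fit

t = np.arange(46, dtype=float)
noisy = 0.5 + 0.2 * np.sin(2 * np.pi * t / 46)
noisy[[5, 20]] -= 0.3                             # two cloudy observations
smoothed = fourier_clean(noisy)                   # cleaned series for detection
```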


You also designed and implemented a deep learning system for automatic crop type classification from aerial (drone) imagery of farm fields. What were the main challenges at the time?

This was my first real exposure to deep learning. At the time, I think the main challenges were more about getting the framework to run and properly use a GPU than about the ML itself. We used Theano, which was one of the ancestors of TensorFlow.

The goal of the project was to classify the type of crop in a field from drone imagery. We tried an approach where the deep learning model used color histograms as inputs, as opposed to just the raw image. To make this work reasonably quickly, I remember having to implement a custom Theano layer, all the way down to some CUDA code. That was a great learning experience and a good way to dig into the technical details of deep learning.
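The histogram-as-input idea translates to very little code today. Here is a hypothetical modern equivalent using NumPy and PyTorch rather than the original custom Theano/CUDA layer (the bin count and class count are invented):

```python
import numpy as np
import torch
import torch.nn as nn

def color_histogram(patch, bins=16):
    """Concatenate per-channel histograms of an RGB patch into one feature vector."""
    feats = [np.histogram(patch[..., c], bins=bins, range=(0, 255), density=True)[0]
             for c in range(3)]
    return np.concatenate(feats).astype(np.float32)

# A small classifier over histogram features instead of raw pixels.
model = nn.Sequential(nn.Linear(3 * 16, 64), nn.ReLU(), nn.Linear(64, 5))
patch = np.random.randint(0, 256, (64, 64, 3))       # stand-in drone image patch
logits = model(torch.from_numpy(color_histogram(patch)))
print(logits.shape)                                   # torch.Size([5]): one score per crop type
```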


You’re officially the Lead Software and Machine Learning Engineer at Picterra. How would you best describe your day-to-day activities?

It really varies, but a lot of it is about keeping an eye on the overall architecture of the system and the product in general, and communicating with the various stakeholders. Although ML is at the core of our business, you quickly realize that most of the time is spent not on the ML itself but on everything around it: data management, infrastructure, UI/UX, prototyping, understanding users, etc. This is quite a change from academia, or from previous experience in bigger companies, where you are much more focused on a specific problem.

What's interesting about Picterra is that we not only run deep learning models for users, but actually allow them to train their own. That is different from the typical ML workflow, where the ML team trains a model and then publishes it to production. It means we cannot manually tweak the training parameters as you often would; we have to find a training method that works for all of our users. This led us to create what we call our 'experiment framework': a big repository of datasets that simulates the training data our users would build on the platform. We can easily test changes to our training methodology against these datasets and evaluate whether they help. So instead of evaluating a single model, we are evaluating an architecture plus a training methodology.
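The interview doesn't show Picterra's code, but the shape of such an experiment framework might look like this (the names and toy scoring are invented for illustration):

```python
def evaluate_methodology(train_fn, score_fn, datasets):
    """Run one training methodology over every dataset in the repository
    and report per-dataset scores plus the overall mean."""
    scores = {name: score_fn(train_fn(train), test)
              for name, (train, test) in datasets.items()}
    scores["mean"] = sum(scores.values()) / len(scores)
    return scores

# Toy stand-ins so the sketch runs end to end: the "model" is a mean label.
train_fn = lambda train: sum(train) / len(train)
score_fn = lambda model, test: 1 - abs(model - sum(test) / len(test))
datasets = {"buildings": ([1, 0, 1], [1, 1]), "cars": ([0, 0, 1], [0, 1])}
print(evaluate_methodology(train_fn, score_fn, datasets))
```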

The other challenge is that our users are not ML practitioners, so they don't necessarily know what a training set is, what a label is, and so on. Building a UI that allows non-ML practitioners to build datasets and train ML models is a constant challenge, and there is a lot of back-and-forth between the UX and ML teams to make sure we guide users in the right direction.


Some of your responsibilities include prototyping new ideas and technologies. What are some of the more interesting projects that you have worked on?

I think the most interesting one at Picterra was the custom detector prototype. A year and a half ago, we had 'built-in' detectors on the platform: detectors that we trained ourselves and made accessible to users. For example, we had a building detector, a car detector, etc.

This is actually the typical ML workflow: you have some ML engineer develop a model for a specific case and then you serve it to your clients.

But we wanted to do something different and push the boundaries a bit. So we said: "What if we let users train their own models directly on the platform?" There were a few challenges in making this work. First, we didn't want it to take multiple hours; if you want to keep a feeling of interactivity, training should take a few minutes at most. Second, we didn't want to require thousands of annotations, which is typically what large deep learning models need.

So we started with a super simple model, did a bunch of tests in Jupyter, and then integrated it into our platform to test the whole workflow, with a basic UI and so on. At first it didn't work very well in most cases, but there were a few cases where it did. This gave us hope, and we started iterating on the training methodology and the model. After some months, we reached a point where it worked well, and our users now use it all the time.

What was interesting about this was the double challenge of keeping training fast (currently a few minutes), and therefore the model not too complex, while at the same time making it complex enough that it works and solves users' problems. On top of that, it works with few (<100) labels in a lot of cases.

We also applied many of Google's "Rules of Machine Learning", in particular the ones about implementing the whole pipeline and metrics before starting to optimize the model. It puts you into 'system thinking' mode, where you figure out that not all of your problems should be handled by the core ML: some can be pushed to the UI, some handled by pre- or post-processing, etc.


What are some of the machine learning technologies that are used at Picterra?

In production, we currently use PyTorch to train and run our models. We also use TensorFlow from time to time, for some specific models developed for clients. Other than that, it's a pretty standard scientific Python stack (NumPy, SciPy) with some geospatial libraries (GDAL) thrown in.


Can you discuss how Picterra works in the backend once someone uploads images and wishes to train the neural network to properly annotate objects?

Sure. First, when you upload an image, we process it and store it in the Cloud-Optimized GeoTIFF (COG) format on our blobstore (Google Cloud Storage), which lets us quickly access blocks of the image later on without having to download the whole thing. This is a key point because geospatial imagery can be huge: we have users routinely working with 50000×50000 images.
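The interview doesn't name Picterra's internal tooling, but windowed COG reads are easy to reproduce with the open-source rasterio library (the URL below is a placeholder):

```python
import rasterio
from rasterio.windows import Window

# Read one 1024x1024 block out of a (potentially 50000x50000) COG without
# downloading the whole file; any local path or HTTP(S) URL to a COG works.
with rasterio.open("https://example.com/imagery/ortho.tif") as src:
    block = src.read(window=Window(col_off=2048, row_off=2048,
                                   width=1024, height=1024))
    print(block.shape)   # (bands, 1024, 1024)
```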

Then, to train your model, you create your training dataset through our web UI by defining three types of areas (sketched in code after this list):

  1. 'training areas', in which you draw training labels
  2. 'testing areas', where the model predicts so you can visualize some results
  3. 'accuracy areas', where you also draw labels, but these are used only for scoring, not for training
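As a rough sketch of what such a dataset amounts to (the structure is invented; Picterra's actual data model is not shown in the interview):

```python
from dataclasses import dataclass, field

@dataclass
class Area:
    kind: str        # "training" | "testing" | "accuracy"
    polygon: list    # area outline, e.g. [(x, y), ...]
    labels: list = field(default_factory=list)   # annotations drawn inside

dataset = [
    Area("training", [(0, 0), (0, 100), (100, 100), (100, 0)],
         labels=[[(10, 10), (10, 20), (20, 20), (20, 10)]]),
    Area("testing",  [(200, 0), (200, 100), (300, 100), (300, 0)]),
    Area("accuracy", [(400, 0), (400, 100), (500, 100), (500, 0)],
         labels=[[(410, 10), (410, 20), (420, 20), (420, 10)]]),
]
# Only "training" labels feed the optimizer; "accuracy" labels only score.
train_labels = [a.labels for a in dataset if a.kind == "training"]
```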

Once you have created this dataset, you simply click 'Train' and we train a detector for you. What happens next is that we enqueue a training job and have one of our GPU workers pick it up (new GPU workers are started automatically if there are many concurrent jobs), train your model, save its weights to the blobstore, and finally predict in the 'testing areas' to display the results in the UI. From there, you can iterate on your model: typically, you'll spot some mistakes in the 'testing areas' and add 'training areas' to help the model improve.
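Compressed into an in-process sketch, that enqueue/worker flow looks roughly like this (the real system presumably uses a distributed queue and autoscaled GPU machines, as described above; the job structure is invented):

```python
import queue
import threading

jobs = queue.Queue()        # stand-in for the real message queue

def gpu_worker():
    """Pick up training jobs: train, save weights, predict in testing areas."""
    while True:
        job = jobs.get()
        weights = job["train"]()                  # run the training loop
        job["save_weights"](weights)              # persist to the blobstore
        job["predict_testing_areas"](weights)     # results shown in the UI
        jobs.task_done()

threading.Thread(target=gpu_worker, daemon=True).start()
jobs.put({"train": lambda: "weights-v1",
          "save_weights": lambda w: print("saved", w),
          "predict_testing_areas": lambda w: print("predicted with", w)})
jobs.join()                 # wait for the queued job to finish
```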

Once you are happy with your model's score, you can run it at scale. From the user's point of view this is really simple: just click 'Detect' next to the image you want to run it on. Under the hood, it's a bit more involved if the image is large. To speed things up, handle failures, and avoid detections taking multiple hours, we break large detections down into grid cells and run an independent detection job for each cell. This allows us to run very large-scale detections. For example, we had a customer run detection over the whole of Denmark on 25 cm imagery, which is in the range of terabytes of data for a single project. We've covered a similar project in a Medium post.
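The tiling itself is simple to sketch (the cell size here is arbitrary):

```python
def grid_cells(width, height, cell=10_000):
    """Break a huge detection area into independent cells so each can run as
    its own job, be retried alone on failure, and execute in parallel."""
    for row in range(0, height, cell):
        for col in range(0, width, cell):
            yield (col, row, min(cell, width - col), min(cell, height - row))

# A 50000x50000 image becomes 25 independent detection jobs.
print(len(list(grid_cells(50_000, 50_000))))   # 25
```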


Is there anything else that you would like to share about Picterra?

I think what's great about Picterra is that it is a unique product at the intersection of ML and geospatial. What differentiates us from other companies that process geospatial data is that we equip our users with a self-serve platform. They can easily find locations, analyze patterns, and detect and count objects on Earth observation imagery. This would be impossible without machine learning, yet our users don't need even basic coding skills: the platform does the work based on a few human-made annotations. For those who want to go deeper and learn the core concepts of machine learning in the geospatial domain, we have launched a comprehensive online course.

It is also worth mentioning that the possible applications of Picterra are endless. Detectors built on the platform have been used in city management, precision agriculture, forestry management, humanitarian and disaster risk management, and farming, to name the most common applications. We are surprised every day by what our users are trying to do with the platform. You can give it a try and let us know how it worked on social media.

Thank you for the great interview and for sharing how powerful Picterra is. Readers who wish to learn more should visit the Picterra website.
