Field and service robots have been introduced to various indoor and outdoor environments for decades now, either by developing purpose-built robots or by retrofitting existing working machines while simultaneously developing the relevant algorithms and technologies. One of the absolute success stories has taken place in arable farming. If we consider the large open fields common, for example, in the USA and Australia, it is easy to understand why. Compared to many other important operational domains, such as forests or mines, an open field with a mainly flat surface is a friendly environment. Robotizing farming machines brings many easily quantified benefits: reduced labour costs, relief from the difficulty of finding enough trained workforce, the ability to produce more with less, and tools for meeting future strict emission targets. Many large players, including some relevant original equipment manufacturers (OEMs), are working actively in this field. It does not feel very productive for smaller companies like GIM Robotics to promote our 3D localization, environmental modeling, and situational awareness solutions for use cases that have at least partly matured. It makes much more sense to tackle specific problems that are especially relevant in the average European farming landscape, which consists mostly of smaller fields with irregular shapes. In this blog post, we present our approach and focus on two crucial pieces of this puzzle: 1) how to survive a specific corner case linked to well-known satellite localization challenges, and 2) how to solve certain traversability-related questions when a machine is operating partly among crops or other plants.

Figure 1: Arable farming. Combine and tractor form the core of the machine chain for arable farming. Several manufacturers are introducing their robotized versions. (Shutterstock/Petar Lackovic)


Having a tractor or a combine autonomously drive a straight line while performing common work tasks is, as they say, a no-brainer. To do that, the machine basically only needs to be able to follow a predefined path. The path can be a trajectory drawn on a map, or in some cases it can even be formed by larger plants planted in a row. In the former case, the machine usually uses an RTK-GNSS system (open wide field, no problem) and might have an additional odometry + IMU backup, just to be on the safe side. In the latter case, the machine only needs a way to detect the row of plants (whatever they might be) in front of it. To have an idea of where it is with respect to that row, odometry information from the wheels is useful, although not vital for the operation. In addition to the path-following capability, the machine must be able to detect any major obstacles on its path. If the working area is somehow constrained (fences, distant location, etc.), meaning that the presence of a human is very unlikely, and the overall height of the plants is moderate, this capability can be achieved with a simple and cheap sensory set consisting, for example, of a standard camera system.
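The path-following part of this standard case can be sketched with the classic pure-pursuit controller. The function below is a minimal illustration of the idea, not any manufacturer's actual implementation; the waypoint list, lookahead distance, and wheelbase are all assumed inputs.

```python
import math

def pure_pursuit_steering(pose, path, lookahead, wheelbase):
    """Return a steering angle (rad) that drives the machine toward
    the first path point at least `lookahead` metres away.

    pose: (x, y, heading) of the machine in the field frame.
    path: list of (x, y) waypoints, e.g. an RTK-GNSS trajectory.
    """
    x, y, heading = pose
    # Pick the first waypoint beyond the lookahead distance
    # (fall back to the last waypoint near the end of the path).
    target = path[-1]
    for px, py in path:
        if math.hypot(px - x, py - y) >= lookahead:
            target = (px, py)
            break
    # Angle of the target relative to the current heading.
    alpha = math.atan2(target[1] - y, target[0] - x) - heading
    # Classic pure-pursuit steering law for a bicycle model.
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)
```

With a straight row of waypoints directly ahead, the function returns a zero steering angle; a target off to the left yields a positive one. A real controller would add path resampling, speed scheduling and saturation limits on top of this.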


If the system is working in a normal farm environment, situated close to living quarters, pastures, roads, villages or similar, things start to get interesting. Depending on the safety rules and regulations, the machine must understand the environment better, especially when it comes to humans and larger animals. To do that, a more sophisticated sensory system is required: it should be able to detect humans regardless of their position and the height of the plants. There are no strict standards for these cases, but the safest approach is to make sure that the autonomous machine is at least as safe as it would be in the hands of an experienced human operator. At this point, LiDARs, radars, infrared and stereo cameras, etc. should be included in the system. For further information about the capabilities of camera, LiDAR and radar-based systems to detect humans, we recommend the doctoral thesis of one of our senior staff members, Dr. Juhana Ahtiainen. With that kind of setup, the perception system can provide the needed situational awareness for the machine. It can detect, classify, and track relevant objects and provide information for the needed risk analysis software. With modern AI methods, the system should be able to adapt and gradually even learn to improve its performance in similar operations in the future.


A robotics engineer does not have to be an application expert, but it certainly does not hurt. If the engineer was born on a farm and was driving the relevant machines before learning to walk, you can be sure that he knows all the details of the domain. We have that engineer, a specialist: M.Sc. Mikko Seppälä. It is reassuring that you can always turn to him and ask three main questions: a) Could it be done? b) Should it be done? c) How expensive will it be to implement? If Mikko feels positive about the case, we can say yes to our client and start working. After the initial design and development phases, we can pack our hardware and head to his farm. His machines are not from this decade, and most of them are not even from the previous one, but Mikko knows them inside out. We can do fast prototyping and collect bags full of data for later algorithm and software development, testing and validation before even touching our NDA-protected clients’ machines. Mikko’s farm includes plenty of different types of fields, which provide more than enough use cases for us and for our customers.

Figure 2: Not quite the year 2022 model, but Mikko’s red beauty gets the job done. Made for rapid prototyping and data collection.


As mentioned earlier, standard cases are currently basic stuff for most mobile robotics companies and top-of-the-line agricultural machine manufacturers. Both the localization and situational awareness capabilities are available, and current drive-by-wire agricultural machines can be controlled with widely available path following and obstacle avoidance algorithms for open field operations. The real challenge comes when you want to minimize the man-machine interaction during the full working cycles of various agricultural operations. Having an operator continuously monitor the performance of a single machine, on site or via teleoperation, provides very little savings compared to normal manual operation. Having several machines monitored by one operator naturally makes more sense. Even less is saved when sections of the working cycle must always be done manually for various reasons. These include, for example, the phase when the machine comes out of the garage and moves from the farmyard to the open field, or when it moves from one field to another via a forest road surrounded by tall trees. In those cases, machines relying solely on RTK-GNSS-based solutions cannot always operate fully autonomously due to the lack of satellite coverage.

Video 1: Tractor leaving the garage and moving through the farmyard. Although not so serious in this case, these parts of the work cycle can cause problems for pure GNSS-based localization solutions.


Our 3D positioning and mapping module fuses multiple sensor modalities to ensure pose estimate reliability in all situations. Typically, we use point cloud data, satellite positioning (if available), accelerations, angular velocities, and wheel revolutions. Laser scanners are normally used to gather the point cloud data, but any sensor that produces point clouds can be used with our system. With our GIM-Mapper, we can create a precise 3D representation of the environment; depending on the use case, the machine, and especially the sensory setup, the accuracy of the map can be adjusted accordingly. With that kind of setup, we can produce 3D environmental modeling, mapping and even surface modeling. GIM-Mapper needs our GIM-Locator to operate, because to map with a mobile machine, you obviously need to know how you have been moving and where you have been at any given time. And to make any machine capable of driving autonomously, you need real-time situational awareness that gives the machine the capability to understand what is going on around it, now and in the near future. Our GIM-Observer provides that information for your machine. Although our Locator, Mapper, and Observer trio naturally absorbs all the information that helps to fulfill the mission objectives, its value becomes extremely clear to everybody in those cases where the GNSS signal is denied for one reason or another. You can read more about our solutions in this blog post.
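To give a feel for what fusing these modalities means, here is a deliberately simplified sketch: dead-reckoning from wheel speed and IMU yaw rate, with GNSS fixes blended in whenever one is available. This is not GIM-Locator's actual algorithm (a production system would use a proper estimator such as an EKF or a factor graph), and the `gnss_weight` blending factor is a made-up illustration parameter.

```python
import math

class PoseFuser:
    """Toy 2D pose fuser: propagates the pose from odometry + IMU,
    which are always available, and pulls the estimate toward a
    GNSS fix whenever one arrives."""

    def __init__(self, x=0.0, y=0.0, yaw=0.0, gnss_weight=0.2):
        self.x, self.y, self.yaw = x, y, yaw
        self.gnss_weight = gnss_weight  # trust placed in each GNSS fix

    def predict(self, speed, yaw_rate, dt):
        """Dead-reckoning step from wheel speed (m/s) and IMU yaw rate (rad/s)."""
        self.yaw += yaw_rate * dt
        self.x += speed * math.cos(self.yaw) * dt
        self.y += speed * math.sin(self.yaw) * dt

    def correct_gnss(self, gx, gy):
        """Blend in a GNSS position fix; simply skipped when GNSS is denied."""
        w = self.gnss_weight
        self.x += w * (gx - self.x)
        self.y += w * (gy - self.y)
```

The point of the structure is that `predict` keeps running through a garage, farmyard or forest road, while `correct_gnss` is only called when satellites are actually visible; point-cloud matching against a prior map would slot in as another correction of the same form.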


Terrain traversability analysis is used to generate traversability maps of the environment that quantify the difficulty an autonomous mobile robot would encounter in passing through a particular region. Traversability maps are typically platform dependent, since the locomotion capabilities of different-sized platforms may differ significantly. The essence of traversability analysis is deducing whether an area is traversable or not, given the platform constraints and sensor data. This is an easy task in a simple structured environment; however, estimating terrain traversability in natural off-road environments is extremely challenging due to the terrain’s highly variable appearance and geometric properties. For example, vegetation is often interpreted as an obstacle by state-of-the-art methods, even though in practice it is often possible to drive through sparse vegetation. To be fair, this is also a challenging task for human drivers, since we cannot always see obstacles behind vegetation. Human drivers typically adjust their speed in situations with limited visibility of the area ahead so that they can stop if they detect an obstacle. This could, however, lead to avoiding even traversable vegetated areas altogether. In any case, a representative spatial model of the surrounding environment is needed to generate traversability maps; this part uses our GIM-Locator and GIM-Mapper modules. Vegetation classification and traversability analysis within vegetated areas should always receive special attention, since traversing vegetated environments is particularly challenging for autonomous machines working in agricultural domains. Most current traversability analysis methods consider obstacles rigid and static and therefore fail to deal with vegetation-like obstacles. This problem has been approached by classifying vegetation to distinguish it from other types of obstacles.
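As a toy illustration of how a platform-dependent traversability map can be built from a point cloud, the sketch below bins points into grid cells and marks a cell traversable if the height spread inside it stays below a platform-specific step limit. This is only the simplest geometric criterion, not the methods discussed above; `cell` and `max_step` are assumed, platform-dependent parameters.

```python
from collections import defaultdict

def traversability_map(points, cell=0.5, max_step=0.15):
    """Classify grid cells as traversable from a 3D point cloud.

    points:   iterable of (x, y, z) in metres.
    cell:     grid resolution in metres.
    max_step: largest height span (z range) within a cell that the
              platform is assumed to climb over.
    Returns a dict mapping (ix, iy) cell indices to True/False.
    """
    heights = defaultdict(list)
    for x, y, z in points:
        heights[(int(x // cell), int(y // cell))].append(z)
    # A cell is traversable when the height spread inside it is small.
    return {c: (max(zs) - min(zs)) <= max_step for c, zs in heights.items()}
```

A purely geometric rule like this is exactly what misclassifies sparse vegetation: tall grass raises the z-range of a cell just like a rock does, which is why vegetation-aware classification is needed on top.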

Video 2: Plane fitting algorithm used to detect a swath of straws. Red points represent the ground plane, blue points are “positive detection”, green are “negative detection”, and lime yellow/green are “non-return points” including for example the exhaust pipe visible in the front. The actual magic happens when our in-house algorithms extract those 10-15 cm tall swaths of straw.
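The swath detection in Video 2 rests on first separating ground-plane points from points rising above the ground. The sketch below shows only that thresholding step under a flat-ground assumption (the in-house algorithm fits an actual 3D plane); the quartile-based ground height and the `tolerance` parameter are illustrative assumptions, not our implementation.

```python
def split_by_ground_plane(points, tolerance=0.05):
    """Split a point cloud into ground points and points above ground.

    Flat-ground assumption: the ground height is estimated as the
    lower-quartile z value, which tolerates some above-ground points
    better than a plain minimum would.  Points within `tolerance`
    metres of that height count as ground.
    """
    zs = sorted(z for _, _, z in points)
    ground_z = zs[len(zs) // 4]  # crude robust ground-height estimate
    ground, above = [], []
    for p in points:
        (ground if p[2] <= ground_z + tolerance else above).append(p)
    return ground, above
```

With a split like this, a 10-15 cm swath of straw shows up as a cluster of "above" points hovering just over the fitted plane, which is what the classification colors in the video visualize.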

Our main expertise in this context comes from the doctoral research conducted at GIM Robotics by our senior staff member Dr. Juhana Ahtiainen, Lead Robotics Engineer and Team Leader (Positioning and Mapping). His aforementioned thesis was titled “Safe Navigation for Unmanned Ground Vehicles: Novel Methods for Terrain Traversability Analysis and Human Detection”. Its main contribution was a set of novel terrain traversability analysis methods developed for unstructured environments. The thesis presented a novel, efficient representation for traversability mapping and proposed two new approaches for traversability classification that exploited this representation. Furthermore, it presented two innovative methods for augmenting traversability maps with ultra-wideband (UWB) radar data. Since UWB radars can penetrate some amount of vegetation, the developed methods enabled the clearance of obstacle-free vegetation (an area of vegetation that can be driven through) from the generated traversability maps, which was not possible at the time of the thesis’s publication in 2017. Since then, we have carried out various projects in which we have tackled traversability-related problems with field and service robots.

Figure 3: Field and service robots are here to stay.


GIM Robotics continues its journey to become a serious player in the wider field of agriculture. We do that by showing our expertise in current and future proofs of concept, pilots, projects and productizations. If you want to learn more about our offerings, give us a call or use the contact form. We are happy to tell you how we can help relevant stakeholders flourish in the future, as the field moves even faster towards precision farming.

It makes much more sense to be a pioneer than a follower.
