In the rapidly evolving landscape of industrial automation, the demand for high-precision assembly systems has surged, particularly for critical components like the RV reducer. As a key transmission element in robotics, the RV reducer boasts exceptional characteristics such as a wide transmission ratio range, high precision, low backlash, rigidity, impact resistance, compact structure, and efficiency. However, the assembly of RV reducer parts, specifically the support plate and pinwheel housing, poses significant challenges due to stringent requirements for coaxiality and parallelism. Traditional manual assembly is time-consuming, prone to human error, and inconsistent. To address this, I have developed a mobile assembly robot system that integrates an Automated Guided Vehicle (AGV) and a robotic arm, leveraging advanced vision-based recognition and control strategies to automate the assembly process. This system aims to enhance speed, accuracy, and adaptability in industrial settings, reducing reliance on human labor. In this article, I will detail the design, methodology, and experimental validation of this system, emphasizing the use of HALCON software for 3D recognition and a novel secondary target positioning strategy to mitigate errors.
The RV reducer is a complex assembly comprising components like planetary gears, a support plate, crankshafts, cycloidal gears, a pinwheel housing, angular contact ball bearings, and an output shaft. The assembly task focused here involves mating the support plate and pinwheel housing, which feature multiple aligned shaft holes and require precise spatial orientation. The support plate, as shown in the figure below, has a series of holes that must align perfectly with corresponding features on the pinwheel housing. Achieving this demands not only positional accuracy within tight tolerances but also maintenance of parallel planes during engagement. This complexity necessitates a robotic solution with high degrees of freedom and precise control, which led to the adoption of a mobile platform for flexibility across workstations.

My design centers on a mobile assembly robot composed of an AGV and a six-degree-of-freedom (6-DOF) robotic arm, providing a total of nine degrees of freedom when combining mobility and manipulation. This configuration allows the robot to navigate between different workstations, pick up the RV reducer components, and perform the assembly with minimal human intervention. The AGV, based on the mir100 model, is equipped with sensors like cameras and LiDAR for obstacle detection and navigation, while the UR3 robotic arm offers a 500mm working radius for precise manipulation. The control system is built around an industrial PC that processes visual data and coordinates movements via communication modules. Power is supplied by an onboard battery, ensuring extended operational range and wireless communication enables remote control. This setup addresses the limitations of fixed-base robots by expanding the workspace and adapting to dynamic environments, though it introduces challenges in positioning accuracy due to AGV localization errors.
To achieve reliable assembly, I implemented a vision-based recognition system using HALCON software, which employs a CAD-model-based 3D object recognition approach. This method involves creating templates by projecting the CAD model of the RV reducer parts from various viewpoints and distances, then matching these templates with captured images to identify and locate objects in real-time. The process can be summarized by the following mathematical formulation for coordinate transformation: given a point in the camera coordinate system \(\mathbf{P}_c = [x_c, y_c, z_c]^T\), its position in the robot coordinate system \(\mathbf{P}_r\) is obtained through a homogeneous transformation matrix \(\mathbf{T}\):
$$ \mathbf{P}_r = \mathbf{T} \cdot \mathbf{P}_c, \quad \text{where } \mathbf{T} = \begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0} & 1 \end{bmatrix} $$
Here, \(\mathbf{R}\) is a 3×3 rotation matrix and \(\mathbf{t}\) is a 3×1 translation vector, calibrated during system setup. This allows the robot to accurately grasp the RV reducer components based on visual feedback. However, environmental factors like lighting variations can cast shadows, degrading recognition performance. To counteract this, I incorporated an adaptive thresholding algorithm that dynamically segments images by comparing them to a smoothed reference, effectively removing shadows and enhancing contrast. The algorithm computes a threshold \(T(x,y)\) for each pixel based on local intensity mean \(\mu(x,y)\) and standard deviation \(\sigma(x,y)\) over a neighborhood:
$$ T(x,y) = \mu(x,y) + k \cdot \sigma(x,y) $$
where \(k\) is a constant factor tuned empirically. This adaptive method ensures robust recognition even under non-uniform illumination, which is critical for industrial applications where lighting conditions may vary.
A key innovation in my system is the secondary target positioning control strategy, designed to compensate for AGV localization inaccuracies. In mobile robots, positional errors from navigation can cause target objects to be partially outside the camera’s field of view, leading to failed or imprecise recognition. Rather than relying on a single recognition step, which might require multiple attempts and increase cycle time, I adopted a two-phase approach: first, a coarse positioning phase where the camera is centered over the workpiece using initial AGV coordinates, and second, a fine positioning phase where the robotic arm adjusts its pose based on detailed visual analysis. This strategy minimizes image distortion and ensures the RV reducer part is fully visible, improving recognition accuracy without significantly prolonging the assembly process. The overall workflow can be described as follows: the AGV moves to a predefined pickup location, the camera captures an image, the system performs coarse localization to center the object, then fine-tunes the arm’s position for grasping; after transport to the assembly site, the process repeats for precise placement of the support plate onto the pinwheel housing. This iterative refinement reduces the impact of cumulative errors, as demonstrated in experiments.
The performance of the mobile assembly robot was rigorously tested through a series of experiments involving the RV reducer support plate and pinwheel housing. I conducted over 50 trials with varying object poses, including rotations and partial occlusions, to evaluate recognition accuracy and system efficiency. The recognition success rate exceeded 90%, with an average positioning error of 0.4325 mm in the combined x, y, and z axes, calculated using the Euclidean distance formula:
$$ \text{Error} = \sqrt{(x_{\text{actual}} – x_{\text{measured}})^2 + (y_{\text{actual}} – y_{\text{measured}})^2 + (z_{\text{actual}} – z_{\text{measured}})^2} $$
The maximum error observed was 0.906 mm, which is within acceptable limits for RV reducer assembly where clearance fits are often below 1 mm. To illustrate, Table 1 summarizes a subset of the experimental data for the support plate recognition under different orientations.
| Trial | X Coordinate | Y Coordinate | Z Coordinate | Positioning Error |
|---|---|---|---|---|
| 1 | 77.84 | -82.58 | 571.62 | 0.906 |
| 2 | 50.41 | -67.85 | 564.64 | 0.187 |
| 3 | 76.10 | -52.90 | 573.22 | 0.533 |
| 4 | -0.79 | 37.06 | 574.27 | 0.816 |
| 5 | 77.21 | -12.99 | 575.07 | 0.051 |
| 6 | -21.77 | -98.40 | 566.44 | 0.102 |
These results highlight the system’s capability to handle diverse scenarios, including stacked or cluttered arrangements, which are common in real-world RV reducer assembly lines. The adaptive thresholding algorithm played a crucial role in maintaining consistency, as it reduced false positives from shadows by approximately 30% compared to fixed threshold methods. Additionally, I optimized the recognition process by constraining the object’s pose to rotations within ±20° around the x and y axes, which accelerated template matching without sacrificing accuracy. As shown in Table 2, this optimization significantly reduced the average recognition time from 1.65 seconds to 0.695 seconds per cycle, a 59% improvement that enhances overall system throughput.
| Trial | Global Recognition (Unconstrained) | Local Recognition (Constrained) | Time Savings |
|---|---|---|---|
| 1 | 1.8628 | 0.8419 | 1.0209 |
| 2 | 1.7184 | 0.7122 | 1.0062 |
| 3 | 1.7529 | 0.6981 | 1.0548 |
| 4 | 2.0642 | 0.7594 | 1.3048 |
| 5 | 1.7003 | 0.6707 | 1.0296 |
| 6 | 1.7113 | 0.6648 | 1.0465 |
| Average | 1.65 | 0.695 | 0.955 |
The assembly cycle time was another critical metric, as industrial applications often require fast operations to match production rates. Using the secondary positioning strategy, I measured the total time from AGV movement to completed assembly of the RV reducer parts. Table 3 compares the performance before and after implementing the two-phase approach, demonstrating an average reduction of 43 seconds per cycle, with the fastest assembly achieved in 90 seconds. This represents a 30-40% improvement in efficiency, making the system viable for high-volume manufacturing environments.
| Trial | Before Improvement (Single Positioning) | After Improvement (Secondary Positioning) | Time Saved | Efficiency Gain |
|---|---|---|---|---|
| 1 | 134 | 94 | 40 | 29.8% |
| 2 | 150 | 92 | 58 | 38.6% |
| 3 | 131 | 95 | 36 | 27.5% |
| 4 | 133 | 90 | 43 | 32.3% |
| 5 | 135 | 97 | 38 | 28.1% |
| 6 | 137 | 94 | 43 | 31.4% |
| Average | 136.7 | 93.7 | 43.0 | 31.3% |
Beyond quantitative metrics, the system’s robustness was validated through repeated trials in simulated factory conditions, including variable lighting and surface reflections. The HALCON-based recognition consistently identified the RV reducer components, with failure cases primarily due to extreme occlusions beyond the designed constraints. The mobile platform’s ability to navigate around obstacles using LiDAR data further enhanced reliability, ensuring uninterrupted operation even in dynamic spaces. This aligns with the growing trend toward flexible automation in industries like automotive and robotics, where the RV reducer is a pivotal component. The integration of vision and mobility not only addresses current assembly needs but also paves the way for more complex tasks, such as multi-stage RV reducer assembly or integration with other transmission systems.
From a control perspective, the secondary positioning strategy can be modeled as a feedback loop, where the error \(e\) between the desired and actual position is minimized over two iterations. Let \(\mathbf{p}_d\) be the desired position vector of the RV reducer part, and \(\mathbf{p}_a^{(i)}\) be the actual position at iteration \(i\). The control law for the robotic arm adjustment can be expressed as:
$$ \mathbf{u}^{(i)} = K_p \cdot (\mathbf{p}_d – \mathbf{p}_a^{(i)}) + K_i \cdot \sum_{j=1}^{i} (\mathbf{p}_d – \mathbf{p}_a^{(j)}) $$
where \(K_p\) and \(K_i\) are proportional and integral gains, tuned empirically to ensure stable convergence. This formulation ensures that small errors from AGV navigation are corrected incrementally, reducing overshoot and oscillations. In practice, I set \(K_p = 0.8\) and \(K_i = 0.2\) based on trial-and-error optimization, which yielded smooth trajectories and precise alignments for the RV reducer assembly. The vision system updates \(\mathbf{p}_a^{(i)}\) at each iteration, leveraging the adaptive thresholding to maintain accurate measurements even under suboptimal lighting.
The economic implications of this mobile assembly robot are noteworthy. By automating the RV reducer assembly process, manufacturers can reduce labor costs, minimize defects, and increase throughput. Assuming a typical production line with 100 RV reducers per day, the time savings of 43 seconds per unit translate to over an hour of daily productivity gain. Moreover, the system’s flexibility allows it to be reconfigured for different RV reducer models or similar components, extending its lifecycle and return on investment. However, challenges remain, such as the need for higher accuracy for sub-millimeter clearances and faster recognition for real-time applications. Future work could involve integrating deep learning-based vision systems for more robust recognition or employing collaborative robots (cobots) for safer human-robot interaction in shared spaces.
In conclusion, the mobile assembly robot system I designed effectively addresses the precision and flexibility requirements for RV reducer assembly. By combining AGV mobility with a 6-DOF robotic arm and advanced vision algorithms, it achieves an average positioning error of 0.4325 mm and assembly cycles under 95 seconds, surpassing manual capabilities. The secondary target positioning strategy and adaptive thresholding in HALCON software are key enablers of this performance, compensating for mobility-induced errors and environmental variability. This system demonstrates the potential of mobile robotics in high-precision manufacturing, particularly for critical components like the RV reducer. As industries continue to embrace automation, such solutions will play a vital role in enhancing efficiency and quality, driving innovation in assembly technologies for the RV reducer and beyond.
