Autonomous Planning and Analysis of Visual Measurement Behavior for Gear Shaft Axial Dimensions

In modern manufacturing, the gear shaft serves as a critical component in transmission systems, and its dimensional accuracy directly impacts the performance and reliability of mechanical equipment. As a complex shaft-type part with multiple stepped features and significant axial length compared to its diameter, the gear shaft requires precise measurement of axial dimensions to ensure quality control. Traditional measurement methods often fall short in terms of efficiency and adaptability, especially for a family of gear shaft variants with diverse sizes. Visual measurement technology, as a non-contact approach, offers a promising solution, but its stability is challenged by the intricate relationships among measurement behaviors, such as image acquisition parameters and processing algorithms. In this study, we address the complexity of visual measurement behavior relationships for gear shafts by developing an autonomous planning and analysis system. This system enables adaptive behavior adjustment through feedback mechanisms, ensuring measurement stability for axial dimensions based on image stitching techniques.

The autonomous planning and analysis system for gear shaft visual measurement is structured around a closed-loop framework that integrates sampling position planning, behavior scheme configuration, measurement execution, and influence analysis. We begin by defining the geometric information of the target gear shaft, such as its total length and step diameters, and use this data to plan the sampling positions required for image stitching. For instance, the number of sampling positions $N$ is calculated based on the shaft length $L$ and the camera's field of view $W$, using the formula: $$N = \lfloor L / W \rfloor + 1 + k,$$ where $k$ is a correction factor to adjust for overlapping regions. The overlapping width $X_1$ between adjacent images is determined as: $$X_1 = \frac{N \cdot W - L - 2 \Delta X}{N - 1},$$ with $\Delta X$ representing the edge margin. This planning ensures that the gear shaft is fully covered by multiple images, which are then stitched together to measure axial dimensions like the total length and step distances. The sampling step size $a$ is derived as $a = W - X_1$, facilitating automated image acquisition along the axial direction.
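As a minimal sketch, the planning formulas above can be evaluated directly; the field-of-view width $W$ and margin $\Delta X$ used below are illustrative assumptions, not values taken from the experimental setup:

```python
import math

def plan_sampling(L, W, delta_x, k=0):
    """Plan axial sampling positions for image stitching.

    L: shaft length (mm), W: camera field of view (mm),
    delta_x: edge margin (mm), k: correction factor for overlap.
    Returns (N, X1, a): station count, overlap width, sampling step.
    """
    N = math.floor(L / W) + 1 + k             # number of sampling positions
    X1 = (N * W - L - 2 * delta_x) / (N - 1)  # overlap of adjacent images
    a = W - X1                                # axial step between stations
    return N, X1, a

# Illustrative inputs: a 162 mm shaft with an assumed 66 mm field of view
# and 3 mm edge margin yields three stations overlapping by 15 mm.
N, X1, a = plan_sampling(162.0, 66.0, 3.0)
```

With these assumed inputs, the plan reproduces the three stations and roughly 15 mm overlap reported in the experimental study below.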

To implement this system, we designed an experimental setup comprising an industrial camera, a telecentric lens, LED lighting sources, and motorized linear stages. The gear shaft is mounted between centers on a worktable, and the camera is positioned on a Z-axis stage attached to an X-axis stage, allowing precise movement along the shaft's axis. This setup supports multi-position sampling, with the camera capturing overlapping images at each station. The lighting combination includes ring lights for surface texture illumination and backlights for enhanced contour contrast, which improves image quality for stitching and measurement. (Figure: experimental setup, illustrating the arrangement of camera, lens, lighting, and stages.)

Behavior planning and analysis form the core of our approach, focusing on the autonomous configuration of image acquisition and processing parameters. For image acquisition, key behaviors include exposure time, gain, and light intensity, which significantly influence image features like corners and edges. We formulate behavior schemes as sets of variable parameters, such as exposure time intervals from 10,000 to 30,000 μs, and analyze their impact on measurement outcomes. Similarly, image processing behaviors involve preprocessing, feature point detection using operators like Harris, feature matching, and edge extraction. The Harris operator, for example, uses gradient smoothness and weight parameters to detect corners in overlapping regions, with the corner response $R$ calculated as: $$R = \det(M) - \alpha \cdot (\operatorname{trace}(M))^2,$$ where $M$ is the gradient covariance matrix and $\alpha$ is a sensitivity parameter. By iterating through behavior combinations and evaluating measurement fluctuations, we optimize these parameters to minimize errors.
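A minimal NumPy sketch of this response computation, assuming a simple box window (rather than the Gaussian weighting common in practice) to accumulate the gradient covariance matrix $M$:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def harris_response(img, alpha=0.04, win=3):
    """Per-pixel Harris corner response R = det(M) - alpha * trace(M)^2.

    img: 2-D intensity array; alpha: sensitivity parameter; win: odd
    window size over which the gradient covariance matrix M is
    accumulated (a box window here, for simplicity).
    """
    Iy, Ix = np.gradient(img.astype(float))  # image gradients
    pad = win // 2

    def box_mean(a):
        # mean of each win x win neighborhood (edge-padded)
        ap = np.pad(a, pad, mode='edge')
        return sliding_window_view(ap, (win, win)).mean(axis=(2, 3))

    # smoothed entries of the gradient covariance matrix M
    Sxx = box_mean(Ix * Ix)
    Syy = box_mean(Iy * Iy)
    Sxy = box_mean(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - alpha * trace ** 2
```

Corner candidates are pixels where $R$ is large and positive; edges yield negative $R$ and flat regions give $R \approx 0$.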

The workflow for axial dimension measurement behavior planning and analysis is divided into three phases. In Phase 1, sampling positions are planned based on the gear shaft's design data and camera specifications. If the overlapping width $X_1$ is too small or too large, we adjust $k$ and $\Delta X$ to achieve a balance between stitching reliability and efficiency. Phase 2 involves executing image acquisition with varying behavior schemes, such as different exposure times, and storing the image sets for processing. In Phase 3, we perform behavior influence analysis by processing the images through stitching and measurement steps. This includes defining regions of interest for feature matching based on planned corner coordinates, extracting edge points using caliper methods, and fitting lines to measure distances. The caliper method involves placing rectangular search areas along edges, with the number of calipers $C$ and their width $W_c$ affecting the extracted points. The distance $D$ between two edges is computed by fitting lines to the points and calculating the perpendicular distance, typically via least-squares fitting: $$\min_{m,b} \sum_i \left(y_i - (m x_i + b)\right)^2.$$ Through cyclic execution, we feed measurement deviations back to update behavior schemes, enabling continuous optimization.
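The final fitting step can be sketched as follows; the function name and the near-vertical edge model $x = m y + b$ (suited to step edges perpendicular to the shaft axis) are illustrative assumptions:

```python
import numpy as np

def edge_distance(pts1, pts2):
    """Perpendicular distance between two near-parallel step edges.

    pts1, pts2: (N, 2) arrays of (x, y) edge points delivered by the
    calipers. Each edge is fitted as x = m*y + b by least squares, and
    the distance uses the mean slope of the two fits (edges assumed
    nearly parallel, as on adjacent gear shaft shoulders).
    """
    def fit(pts):
        y, x = pts[:, 1], pts[:, 0]
        m, b = np.polyfit(y, x, 1)  # least-squares line x = m*y + b
        return m, b

    m1, b1 = fit(pts1)
    m2, b2 = fit(pts2)
    m = 0.5 * (m1 + m2)
    return abs(b2 - b1) / np.hypot(1.0, m)
```

For two exactly vertical edges the slope terms vanish and the result reduces to the difference of the fitted intercepts.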

In our experimental study, we applied this approach to a specific gear shaft model with axial dimensions including a total length of 162 mm and multiple step diameters. The sampling position planning resulted in three stations with an overlapping width of approximately 15 mm, derived from the formulas above. We conducted behavior influence analysis by varying exposure time, feature point extraction parameters, and edge extraction settings, as summarized in Table 1. Each behavior change was evaluated based on the measurement results for the gear shaft’s total length, with the goal of identifying optimal combinations that ensure stability within 0.01 mm.

Table 1: Experimental Tasks for Behavior Influence Analysis

| Behavior Factor | Variation Range | Impact on Measurement |
| --- | --- | --- |
| Exposure time | 10,000 to 30,000 μs, Δt = 1,000 μs | Significant fluctuations in stitched image quality |
| Feature point extraction (Harris) | Smoothness: 0.1 to 1.0; weight: 0.01 to 0.1 | High sensitivity to corner detection accuracy |
| Edge contour extraction (caliper) | Calipers: 8 to 10; width: 5 to 10 pixels | Moderate influence on edge point consistency |

The influence of exposure time on the gear shaft axial dimension measurement is pronounced, as shown in Figure 1. For exposure times below 15,000 μs, the measurement deviation exceeded 0.1 mm, while between 16,000 and 29,000 μs it fell to around 0.05 mm. Specifically, at 16,000, 19,000, 23,000, and 25,000 μs, the measurements were closest to the actual value, with 25,000 μs exhibiting the least fluctuation. This underscores the necessity of optimizing exposure time for reliable gear shaft measurement. A first-order model $M(t) = \beta \cdot t + \gamma$, where $M$ is the measured length and $t$ is the exposure time, captures the trend only coarsely; non-linear effects dominate due to image noise and stitching artifacts.

Next, we examined feature point extraction behaviors using the Harris operator. By varying gradient smoothness and weight matrix values, we observed substantial changes in the stitched image accuracy. For instance, with a smoothness of 0.6 and weight of 0.02, the measurement stability improved across multiple exposure times, as detailed in Table 2. The optimal parameters minimized the reprojection error in feature matching, which is critical for precise alignment of gear shaft images. The error $E$ can be expressed as: $$E = \sum_i \| x_i' - H x_i \|^2,$$ where $H$ is the homography matrix and $x_i$, $x_i'$ are corresponding points in overlapping regions.
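Given a homography estimate and a set of matched points, this error is straightforward to evaluate; the following sketch assumes matches stored as (N, 2) pixel-coordinate arrays:

```python
import numpy as np

def reprojection_error(H, x, x_prime):
    """Sum of squared reprojection errors E = sum ||x' - H x||^2.

    H: 3x3 homography mapping points of one overlapping image into the
    other; x, x_prime: (N, 2) arrays of matched pixel coordinates.
    """
    # lift to homogeneous coordinates
    xh = np.hstack([x, np.ones((len(x), 1))])
    proj = xh @ H.T                     # apply the homography
    proj = proj[:, :2] / proj[:, 2:3]   # back to inhomogeneous pixels
    return float(np.sum((x_prime - proj) ** 2))
```

A perfect estimate (for instance, a pure translation between adjacent stations along the shaft axis) drives $E$ to zero.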

Table 2: Measurement Results under Different Feature Extraction Parameters

| Smoothness | Weight | Measured Length (mm) | Deviation (mm) |
| --- | --- | --- | --- |
| 0.4 | 0.01 | 161.95 | 0.05 |
| 0.6 | 0.02 | 162.00 | 0.00 |
| 0.8 | 0.05 | 162.03 | 0.03 |

Edge contour extraction using the caliper method also showed notable effects on measurement consistency. We tested different numbers of calipers (8, 9, 10) and widths (5 to 10 pixels) on images captured at optimized exposure times. For example, with 8 calipers and a width of 10 pixels, the measurement variance was minimized to under 0.01 mm, as the fitted edge lines became more stable. The number of edge points $P$ extracted per caliper affects the line fitting error, with $P$ proportional to the caliper width $W_c$ and the local contrast $C$: $$P \propto W_c \cdot C.$$ This relationship highlights the importance of adaptive caliper settings for different regions of the gear shaft, such as steps and shoulders.
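A deliberately simplified caliper sketch for a near-vertical edge (one image row per caliper and a pixel-level gradient peak, with no subpixel interpolation); all names and parameters here are illustrative:

```python
import numpy as np

def caliper_edge_points(img, x0, y_start, y_end, n_calipers=8, width=10):
    """Locate a near-vertical edge with rectangular caliper regions.

    img: 2-D intensity array; x0: expected edge column; n_calipers
    search regions of `width` pixels are spread between rows y_start
    and y_end. Each caliper samples a row profile around x0 and reports
    the column of maximum absolute gradient as one edge point (x, y).
    """
    ys = np.linspace(y_start, y_end, n_calipers).astype(int)
    half = width // 2
    points = []
    for y in ys:
        profile = img[y, x0 - half:x0 + half + 1].astype(float)
        grad = np.abs(np.diff(profile))              # 1-D intensity gradient
        x_edge = x0 - half + np.argmax(grad) + 0.5   # midpoint of pixel pair
        points.append((float(x_edge), float(y)))
    return np.array(points)
```

Each returned point feeds the line fitting described earlier; a wider caliper enlarges the search window, which tolerates positioning error but risks locking onto a neighboring edge on closely spaced steps.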

To validate the adaptability of the optimized behavior scheme, we applied it to three additional gear shafts of the same model. The results, summarized in Table 3, demonstrate that the preferred parameters—exposure time of 25,000 μs, smoothness of 0.6, weight of 0.02, and a caliper count of 8 with a 10-pixel width—yielded consistent measurements with deviations within 0.01 mm. This confirms the robustness of the autonomous planning system for gear shaft families. Moreover, we extended the measurement to other axial dimensions, such as step diameters, and compared the visual measurement results with those from vernier calipers. The visual method not only provided higher precision but also enabled simultaneous multi-dimension measurement, enhancing efficiency for mass production.

Table 3: Comparison of Visual Measurement Results for Multiple Gear Shafts

| Gear Shaft ID | Measured Length (mm) | Deviation (mm) | Optimal Exposure Time (μs) |
| --- | --- | --- | --- |
| 1 | 162.002 | 0.002 | 25,000 |
| 2 | 161.999 | 0.001 | 25,000 |
| 3 | 162.001 | 0.001 | 25,000 |

In conclusion, our autonomous planning and analysis system for gear shaft axial dimension visual measurement effectively addresses the complexities of behavior relationships. By integrating sampling position planning, behavior scheme iteration, and influence feedback, we achieve stable measurements with fluctuations controlled within 0.01 mm. The key factors—exposure time, feature point extraction, and edge contour extraction—are optimized through systematic analysis, ensuring reliability across different gear shaft variants. This approach not only improves measurement accuracy but also supports automated quality control in industrial applications, paving the way for broader adoption of visual inspection technologies. Future work could explore deep learning-based behavior adaptation for even greater autonomy in handling diverse gear shaft geometries.