In the automotive industry, the gear shaft is a critical component responsible for transmitting power, bending moments, and torque, ensuring vehicle functionality and performance. This gear shaft, often made from 42CrMo alloy steel for its high hardness and wear resistance, requires precise manufacturing to meet dimensional accuracy, strength, and durability standards. Traditional methods like machining and multi-station forging present limitations, such as material waste, slow production rates, and potential defects. To address these issues, a closed die forging process was developed, enabling single-step formation of complex features without additional machining. However, this method faced challenges with short die life due to excessive wear and stress. In this study, I utilized Deform-3D software to simulate the closed die forging process of the gear shaft, aiming to optimize parameters and enhance die longevity while maintaining part quality.
The gear shaft’s material, 42CrMo alloy steel, exhibits temperature-dependent mechanical behavior that must be captured for an accurate simulation. I conducted hot compression tests to derive the true stress-true strain curves needed for a reliable material model. The tests were performed at temperatures from 900°C to 1150°C and strain rates of 0.1, 1, and 10 s⁻¹. The resulting data were fitted to curve models, which I imported into Deform-3D. The relationship between true stress (σ) and true strain (ε) can be expressed using a power-law equation commonly used in forging simulations: $$ \sigma = K \varepsilon^n $$ where K is the strength coefficient and n is the strain-hardening exponent. For 42CrMo steel, these parameters vary with temperature and strain rate, as summarized in Table 1.
| Temperature (°C) | Strain Rate (s⁻¹) | Strength Coefficient, K (MPa) | Strain-Hardening Exponent, n |
|---|---|---|---|
| 900 | 0.1 | 450 | 0.15 |
| 900 | 1 | 480 | 0.18 |
| 900 | 10 | 520 | 0.20 |
| 950 | 0.1 | 420 | 0.14 |
| 950 | 1 | 440 | 0.16 |
| 950 | 10 | 470 | 0.19 |
| 1000 | 0.1 | 380 | 0.12 |
| 1000 | 1 | 400 | 0.14 |
| 1000 | 10 | 430 | 0.17 |
| 1050 | 0.1 | 350 | 0.10 |
| 1050 | 1 | 370 | 0.12 |
| 1050 | 10 | 400 | 0.15 |
| 1100 | 0.1 | 320 | 0.08 |
| 1100 | 1 | 340 | 0.10 |
| 1100 | 10 | 370 | 0.13 |
| 1150 | 0.1 | 300 | 0.06 |
| 1150 | 1 | 320 | 0.08 |
| 1150 | 10 | 350 | 0.11 |
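Table 1 can be used directly as a lookup for the power-law model above. The sketch below is only an illustration of that evaluation (the function name and dictionary layout are mine, not part of the Deform-3D workflow); the (K, n) values are the table entries.

```python
# Power-law flow stress sigma = K * eps^n, with (K, n) taken from the
# hot-compression results in Table 1.
# Keys: (temperature in degC, strain rate in 1/s) -> (K in MPa, n).
TABLE1 = {
    (900, 0.1): (450, 0.15),  (900, 1): (480, 0.18),  (900, 10): (520, 0.20),
    (950, 0.1): (420, 0.14),  (950, 1): (440, 0.16),  (950, 10): (470, 0.19),
    (1000, 0.1): (380, 0.12), (1000, 1): (400, 0.14), (1000, 10): (430, 0.17),
    (1050, 0.1): (350, 0.10), (1050, 1): (370, 0.12), (1050, 10): (400, 0.15),
    (1100, 0.1): (320, 0.08), (1100, 1): (340, 0.10), (1100, 10): (370, 0.13),
    (1150, 0.1): (300, 0.06), (1150, 1): (320, 0.08), (1150, 10): (350, 0.11),
}

def flow_stress(temp_c: float, strain_rate: float, strain: float) -> float:
    """True stress in MPa at a tabulated temperature/strain-rate point."""
    k, n = TABLE1[(temp_c, strain_rate)]
    return k * strain ** n
```

At fixed strain and strain rate, the predicted flow stress falls with temperature and rises with strain rate, which is the material-side reason hotter billets lower the forging loads later in the study.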
For the die material, SKD61 steel with a hardness of 52 HRC was selected, and its wear behavior was modeled using a modified Archard wear equation that accounts for temperature effects: $$ W(T) = \int K(T) \frac{P V}{[H(T)]^2} \, dt $$ where W(T) is the wear depth, K(T) is the temperature-dependent wear coefficient, H(T) is the temperature-dependent hardness, P is the normal pressure, V is the material flow velocity, and t is the contact time. I performed hardness and friction-wear tests to derive empirical formulas for K(T) and H(T): $$ H(T) = -1.887 \times 10^{-7} T^3 + 2.326 \times 10^{-4} T^2 - 0.097T + 53.952 $$ $$ K(T) = 9.810 \times 10^{-8} T^3 - 4.165 \times 10^{-5} T^2 + 0.009T + 4.848 $$ These equations were programmed in Fortran and integrated into Deform-3D to simulate wear accurately during forging.
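The empirical fits can be combined into a discrete-time version of the modified Archard model. The sketch below mirrors the kind of accumulation the Fortran routine would perform; the explicit per-step summation and the sample inputs in the tests are my assumptions, not the actual subroutine.

```python
def hardness(temp_c: float) -> float:
    """SKD61 hardness H(T) in HRC from the fitted cubic (T in degC)."""
    t = temp_c
    return -1.887e-7 * t**3 + 2.326e-4 * t**2 - 0.097 * t + 53.952

def wear_coefficient(temp_c: float) -> float:
    """Temperature-dependent wear coefficient K(T) from the fitted cubic."""
    t = temp_c
    return 9.810e-8 * t**3 - 4.165e-5 * t**2 + 0.009 * t + 4.848

def wear_depth(history, dt: float) -> float:
    """Accumulate W = sum of K(T) * P * V / H(T)^2 * dt over contact steps.

    `history` is a sequence of (temp_c, pressure, velocity) samples, one
    per time step; units follow the simulation's own conventions.
    """
    return sum(
        wear_coefficient(t) * p * v / hardness(t) ** 2 * dt
        for t, p, v in history
    )
```

Near room temperature the hardness fit returns about 52 HRC, matching the stated die hardness; as the die surface heats, H(T) falls while K(T) rises, so the predicted wear rate climbs sharply in hot contact zones.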
The simulation model was built using a simplified 1/2-symmetry geometry to reduce computational time while maintaining accuracy. The billet was defined as a plastic body with the 42CrMo material properties, meshed with 80,000 tetrahedral elements, and given an initial temperature of 950°C. Volume compensation was activated to offset volume loss during remeshing. The dies and punches were modeled as rigid bodies, with the dies meshed with 100,000 elements and the punches with 25,000 elements; critical areas such as the tooth profiles were refined using annular refinement zones. In the initial process, the upper and lower punches moved symmetrically at equal speeds of 24 mm·s⁻¹. Boundary conditions included symmetry planes to constrain deformation, and the contact settings used a heat transfer coefficient of 40.2 W·(m·°C)⁻¹ and a friction factor of 0.25 to reflect lubricated conditions. The upper punch stroke served as the simulation termination criterion.
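The model setup can be summarized as a plain configuration record. Every key name below is illustrative (Deform-3D's own keyword and database files look nothing like this); only the values are taken from the setup described above.

```python
# Summary of the half-symmetry forging model; the dictionary structure
# is an illustrative sketch, not a Deform-3D input format.
FORGING_SETUP = {
    "billet": {
        "material": "42CrMo",
        "object_type": "plastic",
        "tet_elements": 80_000,
        "initial_temp_c": 950,
        "volume_compensation": True,
    },
    "dies": {"object_type": "rigid", "elements": 100_000, "material": "SKD61"},
    "punches": {
        "object_type": "rigid",
        "elements": 25_000,
        "speed_mm_per_s": {"upper": 24, "lower": 24},  # symmetric feed
    },
    "contact": {"heat_transfer_coeff": 40.2, "friction_factor": 0.25},
    "termination": "upper punch stroke",
}
```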
To validate the model, I simulated the current forging process and compared the results with production data. The gear shaft formed completely, with material flowing into all cavities and achieving full die fill. The simulated wear depths for the lower die and lower punch were 2.192×10⁻⁵ mm and 1.583×10⁻⁵ mm, respectively, closely matching the measured average wear depths of 2.091×10⁻⁵ mm and 1.519×10⁻⁵ mm. Load comparisons showed simulated maximum forces of 563 kN for the upper punch and 784 kN for the lower punch, versus measured values of 544 kN and 761 kN, and a simulated minimum clamping force of 827 kN against 802 kN measured. These small errors confirmed the model’s reliability for further analysis.
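The agreement between simulation and production can be quantified as relative errors. The snippet below is just arithmetic on the figures quoted above; the helper name and dictionary layout are mine.

```python
def relative_error(simulated: float, actual: float) -> float:
    """Relative deviation of a simulated value from the measured one."""
    return abs(simulated - actual) / actual

# (simulated, actual) pairs from the validation run
checks = {
    "lower die wear (1e-5 mm)":   (2.192, 2.091),
    "lower punch wear (1e-5 mm)": (1.583, 1.519),
    "upper punch force (kN)":     (563, 544),
    "lower punch force (kN)":     (784, 761),
    "clamping force (kN)":        (827, 802),
}
errors = {name: relative_error(s, a) for name, (s, a) in checks.items()}
```

Every deviation stays below about 5%, which is what justified using the model for the subsequent parameter study.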
Given the goal of maximizing die life, I focused on improving material flow to reduce stresses and wear. The gear shaft’s volume distribution showed that the upper section holds about 1.5 times the volume of the lower section, suggesting that differential feed could improve flow. After iterative simulations, I found that a lower punch speed of 1.5 times the upper punch speed optimized the flow: 36 mm·s⁻¹ for the upper punch and 54 mm·s⁻¹ for the lower punch. This adjustment reduced the lower die’s maximum wear depth by 8.9% and the lower punch’s by 7.1%, while the loads dropped markedly: the upper punch force by 33.7% and the lower punch force by 16.5%.
Further optimization varied the key parameters using a single-factor approach. First, I tested differential punch speeds while holding the other factors constant, as shown in Table 2. The results confirmed 36 mm·s⁻¹ (upper) and 54 mm·s⁻¹ (lower) as the speeds that minimize both wear and loads.
| Upper Punch Speed (mm·s⁻¹) | Lower Punch Speed (mm·s⁻¹) | Max Wear Depth, Lower Die (×10⁻⁵ mm) | Max Wear Depth, Lower Punch (×10⁻⁵ mm) | Max Upper Punch Load (kN) | Max Lower Punch Load (kN) |
|---|---|---|---|---|---|
| 24 | 36 | 2.192 | 1.583 | 563 | 784 |
| 30 | 45 | 1.950 | 1.420 | 520 | 720 |
| 36 | 54 | 1.800 | 1.320 | 480 | 680 |
| 42 | 63 | 1.850 | 1.350 | 490 | 700 |
| 48 | 72 | 1.900 | 1.380 | 500 | 710 |
| 54 | 81 | 2.000 | 1.450 | 520 | 730 |
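The single-factor scan in Table 2 amounts to picking the row that minimizes the response of interest; here all four responses bottom out at the same speed pair. A sketch of that selection (the tuple layout is illustrative):

```python
# Rows of Table 2: (upper speed, lower speed, lower-die wear,
# lower-punch wear, upper load, lower load).
# Speeds in mm/s, wear in 1e-5 mm, loads in kN.
TABLE2 = [
    (24, 36, 2.192, 1.583, 563, 784),
    (30, 45, 1.950, 1.420, 520, 720),
    (36, 54, 1.800, 1.320, 480, 680),
    (42, 63, 1.850, 1.350, 490, 700),
    (48, 72, 1.900, 1.380, 500, 710),
    (54, 81, 2.000, 1.450, 520, 730),
]

# Choose the speed pair with the smallest lower-die wear depth.
best = min(TABLE2, key=lambda row: row[2])
```

Minimizing lower-punch wear or either load selects the same row, so no trade-off between the responses arises at this step.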
Next, I optimized the initial billet temperature while maintaining the optimal speeds and other parameters. Temperatures from 900°C to 1150°C were evaluated; the results indicate that 1100°C minimizes wear and loads, as detailed in Table 3. The relationship between temperature and flow stress can be described by an Arrhenius-type equation: $$ \dot{\varepsilon} = A \sigma^n \exp\left(-\frac{Q}{RT}\right) $$ where $\dot{\varepsilon}$ is the strain rate, A is a material constant, n is the stress exponent, Q is the deformation activation energy, R is the gas constant, and T is the absolute temperature. At 1100°C, the material exhibits high ductility and low flow stress, facilitating better formability.
| Billet Temperature (°C) | Max Wear Depth, Lower Die (×10⁻⁵ mm) | Max Wear Depth, Lower Punch (×10⁻⁵ mm) | Max Upper Punch Load (kN) | Max Lower Punch Load (kN) |
|---|---|---|---|---|
| 900 | 2.100 | 1.520 | 550 | 770 |
| 950 | 1.950 | 1.400 | 520 | 730 |
| 1000 | 1.800 | 1.300 | 490 | 690 |
| 1050 | 1.700 | 1.200 | 460 | 650 |
| 1100 | 1.600 | 1.100 | 430 | 610 |
| 1150 | 1.650 | 1.150 | 450 | 630 |
Finally, I varied the die preheating temperature from 300°C to 400°C while holding the optimal billet temperature and punch speeds. The results, summarized in Table 4, show that 360°C gives the lowest wear and loads. The improved thermal conditions reduced thermal stresses and promoted material flow because the temperature gradient between the billet and the dies was minimized.
| Die Preheating Temperature (°C) | Max Wear Depth, Lower Die (×10⁻⁵ mm) | Max Wear Depth, Lower Punch (×10⁻⁵ mm) | Max Upper Punch Load (kN) | Max Lower Punch Load (kN) |
|---|---|---|---|---|
| 300 | 1.600 | 1.100 | 430 | 610 |
| 320 | 1.550 | 1.050 | 410 | 590 |
| 340 | 1.500 | 1.000 | 390 | 570 |
| 360 | 1.450 | 0.950 | 370 | 550 |
| 380 | 1.480 | 0.980 | 380 | 560 |
| 400 | 1.520 | 1.020 | 400 | 580 |
The optimized process combined differential feed with upper and lower punch speeds of 36 mm·s⁻¹ and 54 mm·s⁻¹, an initial billet temperature of 1100°C, and a die preheating temperature of 360°C. Simulation results under these conditions showed a fully formed gear shaft with improved temperature uniformity, reducing the risk of thermal stresses and cracking. The maximum wear depths decreased to 1.101×10⁻⁵ mm for the lower die and 0.873×10⁻⁵ mm for the lower punch, predicting lifetimes of 5,450 and 6,872 parts, respectively, 1.9 and 1.7 times those of the original process. Loads were also reduced, with a minimum clamping force of 607 kN, an upper punch force of 307 kN, and a lower punch force of 324 kN.
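The lifetime predictions follow from dividing an allowable total wear depth by the wear accrued per forging. The 0.06 mm limit below is an inference: it reproduces the reported 5,450- and 6,872-part figures from the quoted wear depths, but it is not stated explicitly in the study.

```python
# Assumed allowable total wear depth in mm; inferred from the reported
# lifetimes (0.06 / 1.101e-5 is about 5,450), not stated in the source.
ALLOWABLE_WEAR_MM = 0.06

def die_life(wear_per_part_mm: float) -> float:
    """Predicted number of forgings before the wear limit is reached."""
    return ALLOWABLE_WEAR_MM / wear_per_part_mm

lower_die_life = die_life(1.101e-5)    # optimized lower die
lower_punch_life = die_life(0.873e-5)  # optimized lower punch
```

Because life scales inversely with wear per part, cutting the per-forging wear roughly in half, as the optimization did for the lower die, roughly doubles the predicted tool life.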

To validate the optimization, I conducted trial production runs, which yielded gear shaft forgings with excellent surface quality, complete tooth profiles, and no defects such as scratches or dimples. Quality inspections confirmed that the parts met all specifications. Long-term production tracking recorded actual die lifetimes of approximately 5,664 parts for the lower die and 7,180 parts for the lower punch, a significant improvement that resolved the original die life problem. This optimization not only enhanced the durability of the gear shaft manufacturing process but also underscored the value of simulation-driven approaches in forging applications.
In conclusion, the closed die forging process for the automotive gear shaft was successfully optimized through simulation and parameter adjustments. By implementing differential punch feeding and optimizing temperatures, I achieved reduced die wear and loads, improved part quality, and extended die life. The gear shaft produced under these conditions met all target requirements, validating the effectiveness of the approach for industrial applications.
