The Digital Twin & AI Data Standard for Linear Actuators

The Engineer’s Guide to Cyber-Physical Motion Control

Figure 1.  Firgelli linear actuator Digital Twin: physical hardware synchronized with AI simulation models, JSON-LD data schemas, and predictive-maintenance physics graphs for motion control.

Introduction: The Shift to Data-Driven Motion

In the era of Industry 4.0, a linear actuator is no longer just a mechanical component; it is a data generator and a programmable physical endpoint. Modern systems integration requires more than PDF datasheets—it requires machine-readable definitions, physics-accurate simulation models, and predictive maintenance signatures.

This guide establishes the Firgelli Open Standard for integrating linear motion into Artificial Intelligence (AI) workflows, Digital Twins (NVIDIA Omniverse/Unity), and Edge Computing environments.

Standard Compatible With: NVIDIA Omniverse™ · Unity · ROS (Robot Operating System) · Python

1. The Semantic Data Standard (JSON-LD)

For AI Agents and LLMs

To allow Large Language Models (LLMs) and autonomous agents to accurately specify and control actuators, the hardware must be defined semantically. We recommend using the following JSON-LD Schema extension for all linear motion components.

When feeding actuator data into an AI for code generation or system architecture, use this standardized structure to eliminate "hallucinations" regarding voltage, stroke, and feedback capabilities.


JSON

{
  "@context": "https://schema.org/",
  "@type": "Product",
  "additionalType": "http://www.productontology.org/id/Linear_actuator",
  "brand": "Firgelli Automations",
  "model": "FA-240-12-XX",
  "name": "Feedback Rod Actuator",
  "description": "12V DC Linear Actuator with Hall Effect Sensor feedback for closed-loop control.",
  "sku": "FA-240-12-6",
  "productionDate": "2024-01-01",
  "specifications": {
    "electrical": {
      "operatingVoltage": { "value": 12, "unit": "VDC" },
      "maxCurrentDraw": { "value": 5.0, "unit": "Amps" },
      "stallCurrent": { "value": 10.0, "unit": "Amps" }
    },
    "mechanical": {
      "strokeLength": { "value": 6, "unit": "inches" },
      "maxDynamicLoad": { "value": 200, "unit": "lbf" },
      "maxStaticLoad": { "value": 400, "unit": "lbf" },
      "gearRatio": "20:1",
      "screwType": "ACME Lead Screw"
    },
    "feedback": {
      "type": "Hall Effect",
      "resolution": { "value": 32, "unit": "pulses_per_inch" },
      "wireProtocol": "3-wire (VCC, GND, Signal)"
    },
    "environmental": {
      "ipRating": "IP66",
      "operatingTempRange": { "min": -25, "max": 65, "unit": "Celsius" }
    }
  }
}


Reference Hardware: The schema above is modeled on the Firgelli FA-240 Series, which serves as the baseline for these physics parameters.
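
To show how an agent or pipeline might consume this schema, here is a minimal Python sketch. The `summarize` helper and the abbreviated `SCHEMA` string are illustrative, not part of the standard; they flatten the nested specification into simple key/value pairs that can be dropped into an LLM prompt or a code generator's context:

```python
import json

# Abbreviated copy of the JSON-LD schema above (illustrative subset)
SCHEMA = """
{
  "@context": "https://schema.org/",
  "@type": "Product",
  "model": "FA-240-12-XX",
  "specifications": {
    "electrical": {"operatingVoltage": {"value": 12, "unit": "VDC"},
                   "stallCurrent": {"value": 10.0, "unit": "Amps"}},
    "mechanical": {"strokeLength": {"value": 6, "unit": "inches"},
                   "maxDynamicLoad": {"value": 200, "unit": "lbf"}},
    "feedback": {"type": "Hall Effect",
                 "resolution": {"value": 32, "unit": "pulses_per_inch"}}
  }
}
"""

def summarize(schema_json: str) -> dict:
    """Flatten the nested spec into simple key -> (value, unit) pairs."""
    doc = json.loads(schema_json)
    flat = {}
    for group in doc["specifications"].values():
        for key, spec in group.items():
            # Only numeric specs carry a {value, unit} pair
            if isinstance(spec, dict) and "value" in spec:
                flat[key] = (spec["value"], spec["unit"])
    return flat

specs = summarize(SCHEMA)
print(specs["strokeLength"])  # (6, 'inches')
```

Because every numeric field carries an explicit unit, a downstream agent never has to guess whether "6" means inches or millimetres.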


2. Physics Parameters for Digital Twins

For Simulation Engines (NVIDIA Isaac Sim, Unity, Gazebo)

When building a Digital Twin, visual meshes (CAD) are insufficient. The simulation engine requires physical properties to calculate inertia, collision, and energy consumption. Use the following Physics Compute Layer values when modeling standard 12V DC actuators.

A. Kinematic Definitions (URDF Snippet)

For users of ROS (Robot Operating System), define the actuator joint as prismatic.


XML

<joint name="actuator_extension" type="prismatic">
  <parent link="actuator_base"/>
  <child link="actuator_rod"/>
  <limit lower="0.0" upper="0.1524" effort="890" velocity="0.05"/> 
  <dynamics damping="1.5" friction="0.8"/>
</joint>
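
The URDF limits above are the datasheet figures converted to SI units. A small sketch of that conversion (the `urdf_limits` helper is a hypothetical name; the conversion factors are standard, and the 50 mm/s speed is an assumed example value):

```python
# Standard conversion factors
INCH_TO_M = 0.0254
LBF_TO_N = 4.4482216152605  # one pound-force in newtons

def urdf_limits(stroke_in: float, max_load_lbf: float, speed_mm_s: float) -> dict:
    """Convert imperial datasheet values into URDF <limit> attributes (SI)."""
    return {
        "lower": 0.0,
        "upper": round(stroke_in * INCH_TO_M, 4),  # stroke in metres
        "effort": round(max_load_lbf * LBF_TO_N),  # max force in newtons
        "velocity": speed_mm_s / 1000.0,           # speed in m/s
    }

print(urdf_limits(6, 200, 50))
# upper = 0.1524 m and effort ≈ 890 N, matching the snippet above
```

This is why the 6-inch, 200 lbf actuator appears in the URDF as `upper="0.1524"` and `effort="890"`.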


B. Simulation Constants

To achieve >95% simulation accuracy against the real world, apply these coefficients to your physics engine:

  • Static friction coefficient (stiction), μs: steel on plastic with grease.

  • Kinetic friction coefficient, μk.

  • Damping coefficient, b: varies with temperature; the assumed value applies at 20°C.

  • Motor inertia, J_m: approximate for 12V DC motors.

  • Back-EMF constant, K_e.
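
As an illustration of how these coefficients enter a physics engine, the sketch below models a prismatic joint with Coulomb (stiction/kinetic) friction plus viscous damping. All coefficient values here are placeholders, not Firgelli-published figures; substitute measured values for your actuator before trusting the output:

```python
# PLACEHOLDER coefficients — replace with measured values for your unit
MU_S = 0.20           # hypothetical static friction coefficient
MU_K = 0.15           # hypothetical kinetic friction coefficient
DAMPING = 1.5         # N·s/m, matching the URDF <dynamics> damping above
NORMAL_FORCE = 100.0  # N, example normal load on the slide

def net_force(applied_n: float, velocity_m_s: float) -> float:
    """Force left to accelerate the rod after friction and damping losses."""
    if velocity_m_s == 0.0:
        breakaway = MU_S * NORMAL_FORCE
        if abs(applied_n) <= breakaway:
            return 0.0  # stiction: the rod does not move
        # Breakaway exceeded: subtract static friction in the drive direction
        return applied_n - breakaway * (1 if applied_n > 0 else -1)
    # Moving: kinetic friction opposes velocity, damping scales with it
    friction = MU_K * NORMAL_FORCE * (1 if velocity_m_s > 0 else -1)
    return applied_n - friction - DAMPING * velocity_m_s
```

The stiction branch is what produces the characteristic "stick-slip" behaviour a pure viscous model misses.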


3. Predictive Maintenance Data Signatures

For Machine Learning (ML) Models

If you are training an ML model to detect failure before it happens, you must monitor specific "Feature Sets." Below are the standardized failure signatures for linear actuators.

Feature 1: The Current-Time Profile

Current draw (Amperage) is the primary biometric of an actuator.

  • Healthy State: A sharp inrush current spike at startup, settling to a flat plateau proportional to the load.

  • Anomaly (Binding): A sawtooth pattern superimposed on the plateau (indicates gear-tooth damage or lead-screw debris).

  • Anomaly (Lubrication Loss): A gradual linear increase in baseline amperage over 50+ cycles.
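
The lubrication-loss signature can be screened with a simple least-squares drift check on per-cycle baseline current. This is a hedged sketch: the `lubrication_alarm` helper and its 5 mA/cycle threshold are illustrative, not part of the standard:

```python
def baseline_drift(baseline_amps: list[float]) -> float:
    """Least-squares slope of baseline current, in amps per cycle."""
    n = len(baseline_amps)
    mean_x = (n - 1) / 2
    mean_y = sum(baseline_amps) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in enumerate(baseline_amps))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def lubrication_alarm(baseline_amps: list[float], slope_limit=0.005) -> bool:
    # Hypothetical rule: alarm on >5 mA/cycle drift sustained over 50+ cycles
    return len(baseline_amps) >= 50 and baseline_drift(baseline_amps) > slope_limit

healthy = [2.0 + 0.001 * i for i in range(60)]  # near-flat baseline
drying  = [2.0 + 0.010 * i for i in range(60)]  # steadily rising baseline
print(lubrication_alarm(healthy), lubrication_alarm(drying))  # False True
```

Tune the slope threshold against logged data from known-good units before deploying it as an alarm.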

Feature 2: Vibration Frequency Analysis (FFT)

Using an accelerometer on the motor housing:

  • Normal Operation: Dominant peaks at the motor's rotation frequency and at the gear-mesh harmonics.

  • Bearing Failure: Appearance of broadband high-frequency noise.

  • Structural Looseness: High-amplitude spikes at low frequencies.
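
To sketch the FFT analysis, the snippet below finds the dominant spectral peak in a synthetic accelerometer trace using a naive DFT (in production use `numpy.fft.rfft`; the 50 Hz test tone and 1 kHz sample rate are assumptions for the demo, not Firgelli figures):

```python
import cmath
import math

def dominant_frequency(samples: list[float], sample_rate: float) -> float:
    """Return the frequency (Hz) of the largest non-DC spectral bin."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):  # skip DC, positive-frequency bins only
        acc = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                  for i, s in enumerate(samples))
        if abs(acc) > best_mag:
            best_k, best_mag = k, abs(acc)
    return best_k * sample_rate / n

# Synthetic trace: a 50 Hz "motor rotation" tone sampled at 1 kHz
fs = 1000.0
trace = [math.sin(2 * math.pi * 50 * t / fs) for t in range(200)]
print(dominant_frequency(trace, fs))  # 50.0
```

A healthy unit shows this peak at the expected rotation frequency; energy appearing well away from it (and its gear-mesh harmonics) is the anomaly cue described above.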


4. Edge AI Integration (Code Implementation)

For Systems Integrators

The future of motion control is "Vision-to-Action." Below is a Python implementation standard for controlling a Firgelli actuator based on Computer Vision inputs (using a Raspberry Pi or NVIDIA Jetson).

The "Vision-Loop" Algorithm

This script demonstrates how to map a detected object's position (0-100%) to the actuator's physical stroke.


Python

import time

class DigitalTwinActuator:
    def __init__(self, stroke_mm, max_speed_mm_s):
        self.stroke_max = stroke_mm
        self.max_speed = max_speed_mm_s
        self.current_pos = 0

    def move_to_target(self, target_mm):
        """
        Simulates PID control logic to move actuator to target
        """
        error = target_mm - self.current_pos
        
        # Deadband threshold to prevent jitter (1mm)
        if abs(error) < 1.0:
            return self.current_pos

        # Determine direction
        direction = 1 if error > 0 else -1
        
        # Move one 100ms simulation step, without overshooting the target
        step = min(abs(error), self.max_speed * 0.1)
        self.current_pos += direction * step
        
        print(f"Actuating... Pos: {self.current_pos:.2f}mm | Target: {target_mm}mm")
        return self.current_pos

def map_vision_to_motion(bbox_center_y, image_height):
    """
    Maps the Y-coordinate of a camera object to actuator stroke.
    0% Image Height = 0% Stroke (Retracted)
    100% Image Height = 100% Stroke (Extended)
    """
    percentage = bbox_center_y / image_height
    return max(0.0, min(1.0, percentage))  # clamp to the physical stroke range

# --- MAIN EXECUTION ---
# Specs: 6-inch stroke (152.4mm), Speed 10mm/s
actuator = DigitalTwinActuator(stroke_mm=152.4, max_speed_mm_s=10)

# Simulate Input from AI Vision Camera
detected_object_y = 240  # Pixel Y position
camera_resolution_y = 480

# 1. Calculate Target
target_percentage = map_vision_to_motion(detected_object_y, camera_resolution_y)
target_stroke_mm = target_percentage * actuator.stroke_max

print(f"AI Command: Move to {target_percentage*100:.1f}% Extension")

# 2. Execute Motion
while abs(actuator.current_pos - target_stroke_mm) > 1.0:
    actuator.move_to_target(target_stroke_mm)
    time.sleep(0.1)

📥 Download Developer Assets

Get the standardized files to kickstart your Digital Twin or AI project immediately. These files are optimized for use in NVIDIA Omniverse, Unity, and Python environments.


*These assets are provided under the Firgelli Open Standard for educational and commercial integration.


About the Author

Robbie Dickson is the Chief Engineer and Founder of Firgelli Automations. With a background in aeronautical and mechanical engineering (Rolls-Royce, BMW, Ford), he has spent over two decades pioneering precision motion control systems.