2023-12-05

Stevens Institute for Artificial Intelligence looks at prospects for AI and robotics

October's executive order governing artificial intelligence pleased AI experts like Dr. Brendan Englot, who called its balanced standards "encouraging" for a field that has so far seen little formal oversight. As director of the Stevens Institute for Artificial Intelligence, Englot believes reasonable guardrails can responsibly steer innovation in emerging technologies like generative AI.

Spanning cybersecurity, ethics and national security, the directive establishes certification regimes for commercial AI through agencies like NIST while still promoting development. It mandates robust red-team testing to harden systems before deployment and to avoid the potential downsides of uncontrolled progress.

The order also creates an AI safety council spanning critical infrastructure sectors, aiming to counter threats from biased algorithms or adversarial attacks. Additional privacy guidelines seek to prevent abuse of personal data.

Englot called such common-sense regulation "reflective of diverse viewpoints," arguing it can build public trust in AI by resolving disputes over generated content without stifling advancement. The administration likewise stressed streamlining innovation through voluntary principles rather than burdensome restrictions that would hamper competitiveness.

At Stevens, research already targets healthcare improvements via smarter decision-making systems. Interdisciplinary studies apply AI and machine learning to augment human judgment where needed rather than replace it outright.

Having focused his early robotics career on enhanced environmental perception, Englot understands the risks posed by ever more autonomous systems interacting with people. But he believes gradual collaboration among stakeholders on safety-centric standards offers the most prudent way forward.

The emerging consensus around accountable development through proactive policy seeks to keep pace with the rapid software advances reshaping society. AI will march on regardless, but responsible guidance that reduces potential downsides offers the chance, if executed judiciously, to maximize the benefits for all.

While AI advances draw hype, researchers caution that unbridled commercialization risks limiting technological progress. The leadership shakeup at OpenAI typifies the murky path from innovation to business model, says Stevens' Dr. Brendan Englot, and an obsession with marketability may crowd out incremental R&D.

Englot argues that earlier robots proved their capabilities before finding commercial roles, unlike automated driving, which has struggled to meet expectations despite dominating private investment. The urgency around monetizing AI threatens a similar distortion if perfecting performance gives way to promotion.

Generative models also display weaknesses: inaccurate or hallucinated responses are unacceptable in specialized domains like medicine, where mistakes carry real consequences. While chatbots built for entertainment can tolerate blunders, mission-critical applications require contextually attuned training under expert guidance.

Combining large models with physics simulations and human domain knowledge in iterative loops will unlock problem-solving abilities on par with human consultants, Englot projects. But hype outpacing methodical technical refinement risks disillusionment, funding lapses or misguided constraints that shackle progress.

The key lies in deliberate collaboration among researchers, developers and policymakers that balances innovation with responsible stewardship. Rather than rushing products to market before solutions fully mature, using market forces judiciously to incentivize incremental advances is the more prudent path.

If the focus strays too far toward profitability, undue pressure threatens the patient nurturing of beneficial capabilities. Yet considered commercial input can also provide the competitive spark that accelerates the realization of AI's promise. Finding the right equilibrium remains key to technology transfer that benefits society.

While AI safety remains crucial, experts believe appropriate collaboration will responsibly accelerate discoveries where robotics intersects with generative models. Within five years, Dr. Brendan Englot expects dramatic gains in design and decision-making that augment human capacities. Yet translating digital breakthroughs into safe embodied systems lags behind, presenting obstacles that require methodical navigation.

Englot notes that copilot-style coding tools already assist engineers with software tasks. Soon, 3D printers may leverage AI to swiftly model prototype parts. But risk assessments must precede full integration with powerful actuators and autonomy: today's generative risks center on data, while preventing physical harms will require additional safeguards.

Nevertheless, Englot stays bullish on robotics benefiting from scaled general models. Various efforts are replicating the mobility successes of pioneers like Boston Dynamics by optimizing dynamics in simulation. Rigorous testing then ensures safety standards keep pace with improvements in functionality.

DARPA's Subterranean Challenge stretched platform versatility, hinting at that potential. Stevens also conducts marine energy research that demands reliable seafloor manipulation. The building blocks are demonstrating remarkable resilience.

With sight, sound and touch rapidly advancing, AI looks poised to unlock intuitive environments allowing fluid human-robot collaboration. But developers must thoughtfully assess hazards in leveraging still-unproven technologies for mission-critical realms.

By proactively aligning innovators, academics and regulators around sensible constraints that anticipate adverse uses, progress can quicken transparently and cooperatively. With care and creativity, a golden age synthesizing artificial and natural intelligence may bloom responsibly, elevating society toward its aspirations through diligent co-creation.
