IMPENDING ATTACKS ON ADVANCED PRODUCTS (June 4, 2018)

DRIVERLESS VEHICLES


From a product liability defense lawyer’s point of view, the following are among the decisions made by designers and manufacturers that plaintiffs’ trial lawyers will most often unjustifiably attack:

1. The decisions regarding perceived data intake goals, e.g., the objects, events, and conditions that are chosen to be perceived by the vehicle’s perception devices;

2. The decisions regarding the vehicle’s perception radius limits;

3. The decisions regarding the array of perception devices provided with the vehicle, taking into consideration decisions 1 and 2 above, and taking into consideration the inherent capabilities and limitations of each type of perception device or system under consideration;

4. The decisions regarding the “perception synthesis” system, if any, to be used to give the controller a clear picture of the conditions that exist inside and outside the vehicle within the vehicle’s radius of perception;

5. The decisions regarding what decision-making framework, including the moral judgment framework (e.g., algorithmic coding input, deep learning processes utilizing GPUs or other systems, observation-and-replication artificial intelligence processes, replication of human physiologic neuronal and brain activity processes, and other artificial intelligence processes), is to be incorporated within the vehicle’s control system in order to determine the best actions to take under the perceived conditions;

6. The decisions regarding appropriate information to provide to the vehicle’s human driver regarding the functioning of the vehicle’s autonomous systems and any potential risks arising from use of such systems, and regarding any appropriate warnings to provide to the human driver regarding how to avoid any such risks, including any needed warnings regarding system disengagement by the human driver; and

7. The decisions regarding whether the autonomous vehicle’s controller should be designed to self-disengage under certain conditions.
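To make decisions 2 and 3 above concrete, here is a purely illustrative sketch of how a chosen sensor array and its radius limits might be expressed in software. Every device name, range, and limitation below is invented for illustration and is not drawn from any actual manufacturer's design:

```python
from dataclasses import dataclass

# Hypothetical sensor suite: names, ranges, and capabilities are invented
# for illustration only, not taken from any real vehicle design.
@dataclass
class PerceptionDevice:
    name: str
    range_m: float       # the device's own perception radius limit (decision 2)
    works_in_fog: bool   # an inherent capability/limitation (decision 3)

SENSOR_ARRAY = [
    PerceptionDevice("forward_camera", range_m=160.0, works_in_fog=False),
    PerceptionDevice("radar", range_m=120.0, works_in_fog=True),
    PerceptionDevice("lidar", range_m=100.0, works_in_fog=False),
]

def effective_radius(sensors, foggy: bool) -> float:
    """Farthest distance at which any still-usable device can perceive."""
    usable = [s.range_m for s in sensors if s.works_in_fog or not foggy]
    return max(usable, default=0.0)

# In fog, only the radar remains usable, so the effective perception radius
# shrinks to a single device's limit -- exactly the kind of design trade-off
# a plaintiff's expert would probe.
print(effective_radius(SENSOR_ARRAY, foggy=False))  # 160.0
print(effective_radius(SENSOR_ARRAY, foggy=True))   # 120.0
```

The point of the sketch is that decisions 2 and 3 interact: the "radius" a jury hears about is not one number but the envelope of several devices, each with conditions under which it degrades.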

There will likely be attacks launched by plaintiffs’ trial lawyers relating to the appropriateness of all seven of these decisions and others made by the manufacturers, but it is our belief, influenced by decades of experience in product liability battlefields around the country, that manufacturers of autonomous vehicles will see the greatest focus (particularly in early years) from Texas plaintiffs’ trial lawyers on attacking decisions 2, 3, 5, and 6 above.

Such plaintiffs’ trial lawyers will argue that the vehicle’s chosen perception radius is too limited, that the array of perception devices chosen has insufficient capabilities, that the controller made the wrong action choices in light of the perceived conditions, and that the human driver was given too little or too much information.

One of Markland Hanley’s founders, Dale Markland, has written an extensive paper on the subject of the likely attacks that will be made by plaintiffs’ trial lawyers relative to autonomous ground vehicles, which includes an exhaustive discussion of appropriate defensive responses in such cases under Texas law.


This paper would also be useful in the defense of product liability cases governed by Texas product liability law relating to products other than autonomous ground vehicles, particularly other types of autonomous products. If you are an in-house representative of a product manufacturer involved in the design or manufacture of autonomous products or other commercially produced products and would like a copy of the paper, please contact Dale Markland at dmarkland@marklandhanley.com or 214-665-9480.

REQUEST ACCESS TO THE PAPER

The paper includes a thorough analysis of a hypothetical incident or case example which Mr. Markland utilizes to predict probable plaintiffs’ trial lawyers’ attacks and to suggest appropriate available defensive responses. A simplified version of the hypothetical is set forth below.

  • An autonomous vehicle is moving north with the driver behind the wheel, but with control at this point in time in the hands of the vehicle’s algorithm-based controller.
  • Another vehicle is traveling in the same direction and same lane just ahead of the autonomous vehicle.
  • The autonomous vehicle’s perception devices sense the vehicle directly in front of it.
  • A previously undetected malfunction in the autonomous vehicle’s acceleration system (caused by a local mechanic’s poor repair work) makes it clear to the vehicle’s algorithm-based controller that action must be taken to avoid impact with the rear of the vehicle ahead.
  • The autonomous vehicle’s perception devices also pick up a third vehicle traveling abreast of the autonomous vehicle to its right, moving in the same direction and at approximately the same speed.
  • There is southbound traffic going in the opposite direction, but it is beyond the perception radius limit of the autonomous vehicle’s perception devices.
  • The autonomous vehicle’s controller, relying on its algorithmically programmed-in best responses, causes the vehicle to slow to a speed below that of the car to its right and to steer right, with the intention of passing behind that vehicle and moving through an area where the perception devices, within their existing radii of perception, do not pick up any present hazards.
  • The autonomous vehicle’s perception devices work perfectly. The controller’s judgment to slow and move to the right behind the right-side vehicle and into open areas seems sound in light of what the perception devices picked up within their radii of perception. The vehicle takes exactly the actions the controller commands.
  • The actions dictated by the controller lead to multiple deaths.
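The core technical point of the hypothetical can be sketched as a toy rule: the controller selects the maneuver whose path appears clear, but "appears clear" can only mean "no hazard reported within the perception radius." This is an invented illustration, not any actual vehicle's control logic; the condition names are hypothetical:

```python
# Toy controller sketch (invented for illustration; no real vehicle works
# this simply). The controller can reason only about hazards its perception
# devices actually reported within their radius of perception.
def choose_maneuver(perceived: set) -> str:
    if "imminent_rear_end_collision" not in perceived:
        return "maintain_course"
    if "vehicle_abreast_right" in perceived and "hazard_right_rear" not in perceived:
        # The right-rear zone *appears* open -- but anything beyond the
        # perception radius (e.g. the southbound traffic in the hypothetical)
        # is simply absent from `perceived`, not known to be safe.
        return "slow_and_steer_right_behind"
    return "emergency_brake"

# What the hypothetical's devices actually reported:
perceived = {"imminent_rear_end_collision", "vehicle_abreast_right"}
print(choose_maneuver(perceived))  # slow_and_steer_right_behind
```

The sketch shows why a plaintiff's attack on the perception radius (decision 2) and on the decision-making framework (decision 5) are intertwined: the controller's logic may be flawless over the data it has, yet the data set itself was bounded by an upstream design choice.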

How this occurred, what the likely plaintiffs’ trial lawyers’ attacks would be, and what the appropriate defensive responses would be under Texas product liability law are detailed in the paper. The discussion in the paper includes comments regarding the types of perception devices in use and their capabilities; perception radius issues, including issues arising from potential blind spots; decision-making framework issues; and issues relating to the instructions and warnings provided with the product.

The most prevalent unjustified attacks which plaintiffs’ trial lawyers will launch against driverless vehicle manufacturers will likely be related to:

  • The decisions made by the designer or manufacturer regarding perception radius limits;
  • The decisions made regarding the array of perception devices provided and their capabilities and limitations;
  • The decisions made regarding the decision-making frameworks, including moral judgment frameworks provided;
  • The decisions made regarding information and warnings provided to the human driver.

The principal targets of plaintiffs’ trial lawyers will be the vehicle manufacturers which may have been involved in any or all of these decisions and, relative to the decision-making framework design defect claims, the targets will also be the designers or manufacturers that provide or assist in providing the decision-making criteria, including the moral judgments that are set into or determined by the vehicles’ controllers.

Designers and manufacturers must begin to prepare their defensive responses to these unjustified attacks now, and they must engage a team of autonomous product defenders. Those defenders must certainly be experienced trial lawyers, but perhaps as importantly, they must be trial lawyers who focus time and energy on the types of products and technical design and marketing issues described above, and on what can best be done now to respond to the upcoming unjustified attacks. Such a team must also include appropriate experts, a subject also discussed in Dale Markland’s paper discussed above.

PILOTLESS AIRCRAFT

Although pilotless commercial airliners are certainly a thing of the future, autonomous drones and autonomous commuter hovercraft are much closer on the horizon. The designers and manufacturers of such pilotless aircraft will face unique challenges when preparing for the unjustified attacks that will most likely be launched by plaintiffs’ trial lawyers.

From a product liability defense lawyer’s point of view, the following are among the decisions made by designers and manufacturers of pilotless aircraft that plaintiffs’ trial lawyers will most often unjustifiably attack:

1. The decisions regarding perceived data intake goals, e.g., the objects, events, and conditions that are chosen to be perceived by the aircraft’s perception devices;

2. The decisions regarding the aircraft’s perception radius limits;

3. The decisions regarding the array of perception devices provided with the aircraft, taking into consideration decisions 1 and 2 above, and taking into consideration the inherent capabilities and limitations of each type of perception device or system under consideration;

4. The decisions regarding the “perception synthesis” system, if any, to be used to give the controller a clear picture of the conditions that exist inside and outside of the aircraft within the aircraft’s radius of perception;

5. The decisions regarding what decision-making framework, including the moral judgment framework (e.g., algorithmic coding input, deep learning processes utilizing GPUs or other systems, observation-and-replication artificial intelligence processes, replication of human physiologic neuronal and brain activity processes, and other artificial intelligence processes), is to be incorporated within the aircraft’s control system in order to determine the best actions to take under the perceived conditions;

6. The decisions regarding appropriate information to provide to the hovercraft’s human pilot or the drone’s owner regarding the functioning of the aircraft’s autonomous systems and any potential risks arising from use of such systems, and regarding any appropriate warnings to provide to the human pilot or owner regarding how to avoid any such risks, including any needed warnings regarding system disengagement by the human pilot or owner; and

7. The decisions regarding whether the pilotless aircraft’s controller should be designed to self-disengage under certain conditions.

Plaintiffs’ trial lawyers will likely launch attacks relating to the appropriateness of all seven of these decisions and others made by the manufacturers, but it is our belief, influenced by decades of experience in product liability battlefields around the country, that manufacturers of pilotless aircraft will see the greatest focus (particularly in early years) from plaintiffs’ trial lawyers on attacking decisions 2, 3, 5, and 6 above.  Such plaintiffs’ trial lawyers will argue that the aircraft’s chosen perception radius is too limited, that the array of perception devices chosen has insufficient capabilities, that the controller made the wrong action choices in light of the perceived conditions, and that the human pilot or owner of the aircraft was given too little or too much information.

One of Markland Hanley’s founders, Dale Markland, has written an extensive paper on the subject of the likely attacks that plaintiffs’ trial lawyers will make relative to autonomous ground vehicles, including an exhaustive discussion of appropriate defensive responses in such cases under Texas law. This paper would also be useful in the defense of product liability cases governed by Texas product liability law relating to products other than autonomous ground vehicles, particularly pilotless aircraft. If you are an in-house representative of a product manufacturer involved in the design or manufacture of autonomous products or other commercially produced products and would like to receive a copy of the paper, please contact Dale Markland at dmarkland@marklandhanley.com or 214-665-9480.

REQUEST ACCESS TO THE PAPER

Consider the following hypothetical incident case example related to an autonomous drone:

  • An autonomous or self-operating drone’s perception devices identify a commercial airliner in its path within the drone’s perception radius limits, coming straight at the drone.
  • The drone’s algorithmic controller makes the decision to quickly dive and veer significantly left to miss the commercial airliner and eliminate the possibility of a head-on crash, and potential damage to the drone and airliner and potential injuries to the humans in the airliner.
  • The drone’s seemingly appropriately programmed action sets off a chain of events that leads to multiple serious injuries, destruction of the drone, and damage to property on the ground.
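The "seemingly appropriately programmed action" in the drone hypothetical reflects decision 5's moral judgment framework, which can be glimpsed in toy form as a harm-weighted scoring of candidate maneuvers. All weights, probabilities, and maneuver names below are invented; no real autopilot is modeled here:

```python
# Invented harm weights (a toy stand-in for decision 5's "moral judgment
# framework"); no real drone autopilot works this way.
HARM_WEIGHTS = {"human_injury": 1000.0, "aircraft_damage": 50.0, "drone_loss": 1.0}

def expected_harm(outcomes: dict) -> float:
    """Weighted expected harm of a maneuver, computed only from outcomes
    the drone can foresee with data inside its perception radius."""
    return sum(HARM_WEIGHTS[h] * p for h, p in outcomes.items())

# Foreseeable outcomes per candidate maneuver (probabilities invented):
candidates = {
    "hold_course": {"human_injury": 0.9, "aircraft_damage": 0.9, "drone_loss": 0.9},
    # Ground hazards below and to the left lie beyond the perception radius,
    # so they contribute nothing to this maneuver's foreseeable harm.
    "dive_and_veer_left": {"drone_loss": 0.1},
}
best = min(candidates, key=lambda m: expected_harm(candidates[m]))
print(best)  # dive_and_veer_left
```

As in the ground-vehicle hypothetical, the maneuver that scores best over perceived data can still trigger the injurious chain of events described above, because unperceived hazards score as zero.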

The type of chain reaction that can occur with pilotless aircraft can be analogized to that which can occur with an autonomous ground vehicle. An example of such an event on the ground is discussed at length in Dale Markland’s paper referred to above, which can be obtained by contacting Mr. Markland. The most likely attacks by plaintiffs’ trial lawyers in the pilotless aircraft arena are generally the same as in the case of a driverless ground vehicle: attacks aimed at decisions made relative to the aircraft’s perception radius limits, relative to other capabilities of the perception devices utilized, relative to judgments designed into the controller’s decision-making algorithms or other decision-making frameworks, and relative to warnings and instructions given to the pilotless aircraft’s human co-pilot (in the case of a hovercraft) or to the owner or operator (in the case of a drone). Appropriate defensive responses to such attacks under Texas product liability law are discussed in Dale Markland’s paper.

The principal targets of plaintiffs’ trial lawyers in such cases will likely be the pilotless aircraft designers and manufacturers who may have been involved in any or all of the noted decisions. Relative to the decision-making/moral judgment design defect claims, the targets will also be the designers or manufacturers that provide or assist in providing the decision-making criteria, including the moral judgments that are programmed into or determined by the pilotless aircraft’s controllers.

Designers and manufacturers must begin to prepare their defensive responses to these unjustified attacks now, and they must engage a team of autonomous product defenders.  Those defenders must certainly be experienced trial lawyers, but perhaps as importantly, they must be trial lawyers who focus time and energy on the types of products and technical design issues described above, and on what can best be done now to respond to the upcoming unjustified attacks. Such a team must also include appropriate experts, a subject also included in the paper authored by Dale Markland discussed above.

DOMESTIC ROBOTS

Autonomous industrial and domestic robots, including autonomous elderly care robots and household cleaning robots, will be unjustifiably attacked by plaintiffs’ trial lawyers under many of the same theories as discussed on this website relative to driverless vehicles and pilotless aircraft.

As with driverless vehicles and pilotless aircraft, we note that manufacturers of robots should prepare now for the unjustified attacks that will likely be launched against them.

From a product liability defense lawyer’s point of view, the following are among the decisions made by designers and manufacturers of robots that plaintiffs’ trial lawyers will most often unjustifiably attack:

1. The decisions regarding perceived data intake goals, e.g., the objects, events, and conditions that are chosen to be perceived by the robot’s perception devices;

2. The decisions regarding the robot’s perception radius limits;

3. The decisions regarding the array of perception devices provided with the robot, taking into consideration decisions 1 and 2 above, and taking into consideration the inherent capabilities and limitations of each type of perception device or system under consideration;

4. The decisions regarding the “perception synthesis” system, if any, to be used to give the controller a clear picture of the conditions that exist inside and outside of the robot within the robot’s radius of perception;

5. The decisions regarding what decision-making framework, including the moral judgment framework (e.g., algorithmic coding input, deep learning processes utilizing GPUs or other systems, observation-and-replication artificial intelligence processes, replication of human physiologic neuronal and brain activity processes, and other artificial intelligence processes), is to be incorporated within the robot’s control system in order to determine the best actions to take under the perceived conditions;

6. The decisions regarding appropriate information to provide to the robot’s owner regarding the functioning of the robot’s autonomous systems and any potential risks arising from use of such systems, and regarding any appropriate warnings to provide to the owner regarding how to avoid any such risks, including any appropriate warnings regarding system disengagement by the owner; and

7. The decisions regarding whether the robot’s controller should be designed to self-disengage under certain conditions.

There will likely be attacks launched by plaintiffs’ trial lawyers relating to the appropriateness of all seven of these decisions and others made by the manufacturers, but it is our belief, influenced by decades of experience in the product liability battlefields around the country, that manufacturers of robots will see the greatest focus (particularly in early years) from plaintiffs’ trial lawyers on attacking decisions 2, 3, 5, and 6 above.  Such plaintiffs’ trial lawyers will argue that the robot’s chosen perception radius is too limited, that the array of perception devices chosen has insufficient capabilities, that the controller made the wrong action choices in light of the perceived conditions, and that the human owner was given too little or too much information.

One of Markland Hanley’s founders, Dale Markland, has written an extensive paper on the subject of the likely attacks that will be made by plaintiffs’ trial lawyers relative to autonomous ground vehicles, which includes an exhaustive discussion of the appropriate defensive responses in such cases under Texas law. This paper will also be useful in the defense of product liability cases governed by Texas product liability law relating to robots. If you are an in-house representative of a product manufacturer involved in the design or manufacture of autonomous products or other commercially produced products and would like to receive a copy of the paper, please contact Dale Markland at dmarkland@marklandhanley.com or 214-665-9480.

REQUEST ACCESS TO THE PAPER

Some autonomous and partially autonomous robots, e.g., robots designed to care for and provide companionship to elderly persons, raise an additional technical element that is not of as great significance in the driverless vehicle or pilotless aircraft arenas as it will likely be in the robot arena: the robot’s ability to put the elderly person at ease and provide that person with a feeling of being cared for. Although designing a robot to be a caring, attentive, or loving entity, or one that a human can relate to, is a daunting task, it is unlikely that such issues will be as predominant in product liability personal injury cases as the perception radius limits and capabilities issues and the controller decision-making issues, since such cases turn on the potential for physical injury or property damage rather than emotional injury. A rude, uncaring robot may be something to avoid, but at least recovery for emotional injury has some legal limits already in place, and others will be put in place. It is not, however, entirely farfetched to suggest that actions may be filed on behalf of allegedly emotionally damaged elderly humans alleging that a robot’s bad disposition led to a plethora of emotional and psychological conditions.

Setting aside such claims of emotional injury from uncaring robots, let us take a more concrete hypothetical personal injury case scenario. It goes as follows:

  • The elderly care robot is assisting the elderly human in bathing in a stand-up bathtub.
  • The elderly human’s granddaughter is playing in the hall outside of the bathroom.
  • The robot, being designed principally to care for the human at close range, e.g., helping with bathing, dentures, eating, and dressing, has been designed with an overall 360-degree radius of perception out to eight feet, and utilizes smart cameras to receive the perception data. It has no radar, lidar, or sonar perception devices.
  • As the robot continues to assist the human in the stand-up bathtub, the bath water, which is set to a relatively high temperature at the request of the human, begins to generate significant steam in the bathroom. This obscures the robot’s cameras’ vision.
  • The human suffers a medical issue and her head lowers to below the bathtub’s water level.
  • The robot’s cameras, unable to perceive through the steam in the bathroom and the water in the bathtub, provide limited data regarding the position of the human from which the robot’s controller/decision-making framework can make a decision as to proper action.
  • The robot’s controller determines that the best decision/action is to open the door to the stand-up bathtub to investigate further the human’s location and condition.
  • This opening of the bathtub door leads to a flood of water onto the robot and bathroom floor.
  • The woman in the tub slumps down in the tub, continuing her medical decline.
  • The robot’s programmed reaction when it is hit with the torrent of water is to move rearward to get out of the water. In so doing, the robot backs out through the bathroom door and into the hall.
  • These actions result in multiple injuries.
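The decision point in this hypothetical (cameras only, no radar/lidar/sonar fallback, steam-degraded vision) can be sketched as a toy rule. The confidence values, threshold, and action names are invented purely for illustration; no real elderly-care robot is modeled here:

```python
# Invented illustration of the bathing hypothetical's decision point.
# The robot carries only cameras (decision 3), so when the cameras degrade
# there is no alternative perception device to consult.
def assess_human(readings: dict) -> str:
    """Choose an action from camera data alone."""
    confidence = readings["camera_confidence"]  # 0.0 (opaque steam) - 1.0 (clear)
    if confidence >= 0.8:
        return "continue_assisting"
    # Cameras obscured: the controller's programmed response is to
    # investigate, which in the hypothetical means opening the tub door.
    return "open_tub_door_to_investigate"

print(assess_human({"camera_confidence": 0.95}))  # continue_assisting
print(assess_human({"camera_confidence": 0.2}))   # open_tub_door_to_investigate
```

The sketch again ties the predicted attacks together: the "investigate" branch is a decision-framework choice (decision 5), but it only fires because the device array (decision 3) left the controller with no way to perceive through steam and water.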

The type of chain reaction that can occur with domestic robots can be analogized to those that can occur with an autonomous ground vehicle. An example of such a chain of reactions on the ground is discussed at length in Dale Markland’s paper referred to above, which can be obtained by contacting Mr. Markland. The most likely unjustified attacks by plaintiffs’ trial lawyers in the domestic robot arena are generally the same as in the case of a driverless ground vehicle: attacks aimed at decisions made relative to the robot’s perception radius limits, relative to other capabilities of the perception devices utilized, relative to judgments designed into the controller’s decision-making algorithms or other decision-making frameworks, and relative to warnings and instructions given to the robot’s human owner.

Appropriate defensive responses to such attacks under Texas product liability law are discussed in Mr. Markland’s paper.

The principal targets of plaintiffs’ trial lawyers will likely be the robot designers and manufacturers who may have been involved in any or all of the mentioned decisions. Relative to the decision-making framework design defect claims, the targets will also be the designers or manufacturers that provide or assist in providing the decision-making criteria, including the moral value judgments that are programmed into or determined by the robots’ controllers.

Designers and manufacturers must begin to prepare their defensive responses to these unjustified attacks now, and they must engage a team of autonomous product defenders. Those defenders must certainly be experienced trial lawyers, but perhaps as importantly, they must be trial lawyers who focus time and energy on the types of products and technical design and marketing issues described above, and on what can best be done now to respond to the upcoming unjustified attacks. Such a team must also include appropriate experts, a subject also included in the paper authored by Dale Markland discussed above.

NANO-MEDICAL DEVICES

Nano-medical devices will include nano or miniaturized devices that are placed in the human body through injection, swallowing, inspiration, or other means and that, autonomously or partially autonomously, clear plaque from blood vessels or other body parts, deliver cancer-fighting drugs to the precise location of the cancer cells, or release radiation therapy at tumor sites. In November 2017, the Food and Drug Administration approved the first digital pill, Abilify MyCite, which tracks whether patients have taken their medication.

Nano or miniaturized medical devices that incorporate autonomous systems present decisional issues similar to those presented to manufacturers of other autonomous products and those decisions will likely lead to unjustified attacks by plaintiffs’ trial lawyers on such decisions. These will likely include attacks by plaintiffs’ trial lawyers on the following decisions:

1. The decisions regarding perceived data intake goals, e.g., the objects, events, and conditions that are chosen to be perceived by the nano-medical device’s perception devices;

2. The decisions regarding the nano-medical device’s perception radius limits;

3. The decisions regarding the array of perception devices provided with the nano-medical device, taking into consideration decisions 1 and 2 above, and taking into consideration the inherent capabilities and limitations of each type of perception device or system under consideration;

4. The decisions regarding the “perception synthesis” system, if any, to be used to give the controller a clear picture of the conditions that exist inside and outside of the nano-medical device within the nano-medical device’s radius of perception;

5. The decisions regarding what decision-making framework, including the moral judgment framework (e.g., algorithmic coding input, deep learning processes utilizing GPUs or other systems, observation-and-replication artificial intelligence processes, replication of human physiologic neuronal and brain activity processes, and other artificial intelligence processes), is to be incorporated within the nano-medical device’s control system in order to determine the best actions to take under the perceived conditions;

6. The decisions regarding appropriate information to provide to the administering physician regarding the functioning of the nano-medical device’s autonomous systems and any potential risks arising from use of such systems, and regarding any appropriate warnings to provide to the administering physician regarding how to avoid any such risks, including any needed warnings regarding system disengagement by the administering physician; and

7. The decisions regarding whether the nano-medical device’s controller should be designed to self-disengage under certain conditions.

As is the case of other types of autonomous products described on this website, some of the most likely unjustified attacks that will be launched by plaintiffs’ trial lawyers against manufacturers will relate to decisions made relative to the set perception radius, relative to the other capabilities of the perception devices, relative to the decision-making framework, including the moral judgements designed into the controller, and relative to the warnings and instructions given by the manufacturer to the administering physician.

In this particular arena of autonomous nano-medical devices, the capabilities of the perception devices to properly assess the product’s environment will likely be unjustifiably attacked. The sophistication of differentiating between plaque and vessel, between normal essential human organ cell tissues and cancer cells, or between the myriad of other potentially perceived objects confronting the perception devices will be quite remarkable.

The design of the controller’s decision-making algorithm or other decision-making process will also likely be attacked by plaintiffs’ trial lawyers. For example, if tissue is difficult to interpret as cancerous or not, and the anti-cancer drug is consequently released on healthy human organ tissue, plaintiffs’ trial lawyers will attack.

In this product liability arena, the issue of adequacy of warnings and instructions will, under Texas law and the law of many jurisdictions, be judged by a different standard than in the other arenas discussed in this website. The issue will be whether the warnings and instructions given were adequate not for the patient, but rather for the sophisticated physician who is to administer the treatment. The fate of product liability cases will often turn on the language used on labels, on compliance with FDA and other government requirements, and in some instances on the preemption defense.

Let us take a hypothetical incident case example:

  • The nano-medical device delivers its cancer attacking material at the tumor site, but some of this material is delivered to adjacent healthy organ tissue, let us say on the larynx.
  • After treatment, the patient’s cancer is clear, but the patient has lost significant vocal function due to the delivery of the cancer-attacking material on the healthy larynx tissue.

This type of case hypothetical in the nano-medical device arena will set up many of the same types of unjustified attacks by plaintiffs’ trial lawyers as are made relative to other types of autonomous products. Such attacks will target the designers’ or manufacturers’ decisions regarding the perception radius and the capabilities of the chosen perception devices, the decisions, including the moral judgments, being made by the controller, and the sufficiency and appropriateness of the information provided to the administering physician. The assertion that an automatic shut-off of the device’s delivery of its cancer-killing drug is needed when the circumstances are unclear relative to appropriate action could be one of the plaintiffs’ trial lawyers’ attacks.
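The automatic shut-off theory can be made concrete with a toy release gate that withholds the drug when tissue classification is ambiguous. The thresholds, probability values, and action names are invented for illustration; no actual device's control logic is represented:

```python
# Toy release gate (illustrative only). `p_cancer` is the controller's
# estimated probability that the perceived tissue is cancerous, derived
# from whatever its perception devices report.
def release_decision(p_cancer: float, high: float = 0.95, low: float = 0.05) -> str:
    if p_cancer >= high:
        return "release_drug"
    if p_cancer <= low:
        return "withhold"
    # Ambiguous perception: the 'automatic shut-off' a plaintiff's expert
    # might argue for -- suspend delivery rather than risk healthy tissue
    # such as the larynx in the hypothetical above.
    return "suspend_and_disengage"

print(release_decision(0.99))  # release_drug
print(release_decision(0.60))  # suspend_and_disengage
print(release_decision(0.01))  # withhold
```

Note the defensive angle the sketch exposes: where the thresholds are set is itself a design decision, and a wider "suspend" band trades missed tumor treatment against the risk of damaging healthy tissue.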

One of Markland Hanley’s founders, Dale Markland, has written an extensive paper on the subject of the likely attacks that will be made by plaintiffs’ trial lawyers relative to autonomous ground vehicles, which includes an exhaustive discussion of appropriate defensive responses to such attacks under Texas law. This paper will also be useful to manufacturers of autonomous or partially autonomous nano-medical devices. If you are an in-house representative of a product manufacturer involved in the design or manufacture of autonomous products or other commercially produced products and would like a copy of the paper, please contact Dale Markland at dmarkland@marklandhanley.com or 214-665-9480.

REQUEST ACCESS TO THE PAPER

The principal targets of plaintiffs’ trial lawyers will be the nano-medical device designers and manufacturers who may have been involved in any or all of the mentioned decisions. Relative to the decision-making design defect claims, the targets will also be the designers and manufacturers that provide or assist in providing the decision-making framework, including the moral judgments that are set into or determined by the nano-medical devices’ controllers.

Designers and manufacturers must begin to prepare their defensive responses to these unjustified attacks now, and they must engage a team of autonomous product defenders. Those defenders must certainly be experienced trial lawyers, but perhaps as importantly, they must be trial lawyers who focus time and energy on the types of products and technical design and marketing issues described above, and on what can best be done now to respond to these unjustified attacks. Such a team must also include appropriate experts, a subject also included in the referred to paper authored by Dale Markland.

ARTIFICIAL INTELLIGENCE

Manufacturers of products that incorporate true creative artificial intelligence, or hard artificial intelligence, such as products utilizing IBM’s Watson system, will confront some of the most important and weighty technical and legal issues of our time in product liability cases.

From a product liability defense lawyer’s point of view, the following are the most likely decisions made by the designers and manufacturers of artificial intelligence products that plaintiffs’ trial lawyers will unjustifiably attack in product liability cases:

01

The decisions regarding perceived data intake goals, e.g., the objects, events, and conditions that are chosen to be perceived by the artificial intelligence entity’s perception devices;

02

The decisions regarding the artificial intelligence entity’s perception radius limits;

03

The decisions regarding the array of perception devices provided with the artificial intelligence entity, taking into consideration decisions 1 and 2 above, and taking into consideration the inherent capabilities and limitations of each type of perception device or system under consideration;

04

The decisions regarding the “perceptions synthesis” system, if any, to be used to give the artificial intelligence entity a clear picture of the conditions that exist within the entity’s radius of perception;

05

The decisions regarding what decision-making framework, including the moral judgment framework (e.g. deep learning processes utilizing GPU or other systems, and including observation and replication artificial intelligence processes, replication of human physiologic neuronal and brain activity processes, and other artificial intelligence processes), is to be incorporated within the entity’s controller system in order to determine the best actions to take under the perceived conditions;

06

The decisions regarding appropriate information to provide to the entity’s human owner regarding the functioning of the entity’s autonomous systems and any potential risks arising from use of such systems, and regarding any appropriate warnings to provide to the owner regarding how to avoid any such risks, including any appropriate warnings regarding system disengagement by the owner;

07

The decisions regarding whether the entity’s controller should be designed to self-disengage under certain conditions.

Plaintiffs’ trial lawyers will attack the appropriateness of all seven of these decisions and others made by the manufacturers, but it is our belief, influenced by decades of experience on product liability battlefields around the country, that manufacturers of artificial intelligence entities will see the greatest focus (particularly in the early years) from plaintiffs’ trial lawyers on attacking decisions 2, 3, 5, and 6 above. Such plaintiffs’ trial lawyers will argue that the artificial intelligence entity’s chosen perception radius is too limited, that the array of perception devices chosen has insufficient capabilities, that the controller made the wrong action choices in light of the perceived conditions, and that the owner was given too little or too much information.

One of Markland Hanley’s founders, Dale Markland, has written an extensive paper on the subject of the likely attacks that will be made by plaintiffs’ trial lawyers relative to autonomous ground vehicles, which includes an exhaustive discussion of appropriate defensive responses in such cases under Texas law. This paper will also be useful in the defense of product liability cases governed by Texas product liability law relating to products other than autonomous ground vehicles, and particularly those relating to other types of autonomous products such as artificial intelligence.

If you are an in-house representative of a product manufacturer involved in the design or manufacture of autonomous products or other commercially produced products and desire a copy of such paper, please contact Dale Markland at dmarkland@marklandhanley.com or 214-665-9480.

REQUEST ACCESS TO THE PAPER

Markland Hanley lawyers are presently putting together a realistic hypothetical case arising from use of an artificial intelligence entity and leading to personal injury. It is anticipated that at a future date, at least a simplified version of the chosen hypothetical case will be presented on this website page.

The principal targets of plaintiffs’ trial lawyers will be the artificial intelligence entities’ designers and manufacturers which may have been involved in any or all of these decisions, and, relative to the decision-making/moral judgment design defect claims, the targets will also be the designers or manufacturers that provide or assist in providing the decision-making framework, including the moral judgments that are determined by the artificial intelligence entities.

Designers and manufacturers of artificial intelligence entities must begin preparing their responses to the impending attacks now, and engage a team of autonomous product defenders. Those defenders must certainly be experienced trial lawyers, but, perhaps as importantly, they must be trial lawyers who focus time and energy on the types of products and technical design and marketing issues described above, and who focus on what can best be done now to respond to the anticipated unjustified attacks. Such a team must also include appropriate experts, a subject also covered in the above-referenced paper by Dale Markland.

Some of the most significant activity arising from genetic modification occurs in the genetically modified food products arena, including activity involving products intended for animal consumption and for direct human consumption, and in the arena of genetically created, modified, or enhanced pharmaceuticals.

Obviously, there are vast differences between genetically modified products and other types of autonomous products discussed on this website that rely on man-coded algorithms or artificial intelligence for their decision-making frameworks. Genetically modified products do not “make decisions.” The “decisions” are naturally coded in the genes. There are, however, some overlapping attacks that plaintiffs’ trial lawyers will make on genetically modified products that will also be made relative to the other autonomous products discussed on this website, and there will be a great deal of overlap between such product groups relative to available defensive approaches in product liability cases.

Plaintiffs’ trial lawyers and their paid experts will attack virtually all key decisions made by the manufacturers of genetically modified products, particularly decisions made relative to the design and marketing of any genetically modified food product used either for direct consumption by humans or for indirect consumption by humans through consumption of animal food sources. Such plaintiffs’ trial lawyers and their paid experts will also attack pharmaceutical manufacturers relative to every biogenetically engineered pharmaceutical product, no matter what benefits such products provide to humanity.

One of Markland Hanley’s founders, Dale Markland, has written an extensive paper on the subject of the likely attacks that will be made by plaintiffs’ trial lawyers in product liability cases relative to autonomous ground vehicles, including an exhaustive discussion of appropriate defensive responses to be made in such cases under Texas law. This paper would also be useful in the defense of product liability cases governed by Texas product liability law relating to products other than autonomous ground vehicles, including those cases relating to genetically modified products and advanced pharmaceutical products. If you are an in-house representative of a product manufacturer involved in the design or manufacture of autonomous products, or other commercially produced products, including genetically modified products and advanced pharmaceuticals, and desire a copy of such paper, please contact Dale Markland at dmarkland@marklandhanley.com or at 214-665-9480.

REQUEST ACCESS TO THE PAPER

Markland Hanley will in the near future supplement this website page to deal with specific attacks that will likely be made by plaintiffs’ trial lawyers on genetically modified products and advanced pharmaceuticals.

Markland Hanley has defended hundreds of product liability cases, including crashworthiness cases, post-collision fuel fed fire cases, and cases involving allegations of defective design, manufacture, and/or marketing against manufacturers of automobiles, heavy trucks, buses, tractors, rollover protection systems, pharmaceuticals, farming and industrial equipment and components, plastics and chemical products, tires, HVAC systems, oil field equipment, and vehicle components, including seatbelts, brakes, and fuel and electrical systems, among others.

Markland Hanley’s founders also have significant experience in the following practice areas:

  • Complex commercial litigation, including business torts and breach of contract
  • General personal injury litigation
  • Defamation litigation
  • Litigation involving injuries to business and personal reputation, including internet reputation injury and product disparagement litigation
  • Discrimination litigation
  • Sexual harassment litigation
  • Representation of artists and companies in the video game and motion picture industries, entertainers, and advertising industry artists
  • Computer, computer chip, and other high technology litigation
  • Toxic torts
  • Environmental injury litigation
  • Childcare, daycare and other care facility liability

Markland Hanley’s founders have particularly extensive experience defending truck manufacturers in product liability cases.

We have a national practice and bring a sophisticated approach and decades of experience to defending our clients, including major manufacturers in wrongful death and catastrophic personal injury and other product liability cases, in high stakes cases brought by well-financed plaintiff attorneys in jurisdictions that are customarily unfriendly to business. We use our unparalleled knowledge of the law, our intensive preparation and persuasive skills, and, in products cases, our deep understanding of the technical and scientific principles applicable to the products we defend, to advocate powerfully and cost-effectively both in the courtroom and at the bargaining table.

The experience and accomplishments of the firm’s members include the following:

Pharmaceutical, Toxic Tort and Mass Tort
Product Liability Trial Experience
Product Liability Non-Trial Experience
Other Significant Experience

Markland Hanley is uniquely qualified to quickly and efficiently assemble a team of trial lawyers, paralegals, investigators and experts to powerfully and effectively advocate on your behalf. We tailor our advocacy to your needs and bring innovative ideas and time-tested strategies to the courtroom.

For further information about our practice, or if we can be of assistance, please call or email Dale Markland at 214-665-9480/dmarkland@marklandhanley.com or Tara Hanley at 214-665-9479/thanley@marklandhanley.com.