CISSP Development Methodologies – Bk1D8T1St1P7

Artificial Intelligence

Artificial intelligence (AI) is a broad topic covering automated capabilities such as learning, development, problem-solving, reasoning, interpreting complex data, game playing, speech recognition, social intelligence, and perception.

These are some of the areas of information systems security in which AI has been used:

  • Incident response
  • Cognitive algorithms to recognize malware behaviors
  • Automated defense based on autonomous learning security systems
  • Assistive technology
AI and Security

AI is not without security concerns. Common security concerns include social engineering, the unpredictability of AI, and enhanced malware. Each of these is explored next.

AI and Social Engineering

One example of AI social engineering is the Microsoft chatbot “Tay.” Tay was a machine-learning Twitter bot designed to adapt its conversational style based on social training. Within 24 hours of public deployment, repetitive interaction with abusive users’ tweets had trained Tay to spit back hate speech, effectively turning it into a troll based on modeled behavior. This form of abuse might be considered a form of “social engineering” of an AI program.

As more socially intelligent machine entities enter widespread use, this form of attack will become a weapon in the arsenal of cyber-psychological warfare. Such a choice of vocabulary was obviously not the intention of Tay’s designers. Although this was not a direct security issue with Tay, users exploited the features of AI. To confront the threat of social engineering against AI-based social programs, the issues of unpredictability and enhanced malware need to be addressed.

Securing socially intelligent learning agents requires developing and following a plan that leads toward the agent’s intended end-state behavioral outcomes. This plan breaks from a pure learning experience, but given what happened to the intelligent agent Tay, it is a necessary and worthwhile constraint.

A secure development management plan for a socially intelligent learning agent includes the following steps:

  1. Determine the agent’s behavioral outcomes.
  2. Develop a training set oriented toward those behavioral outcomes.
  3. Train the agent on the curated training set to seed the initial language dictionary and behaviors.
  4. Develop a blacklist of languages and language patterns, and block the agent’s exposure to the blacklisted items.
  5. Develop a training set to train the agent on the proper handling of the blacklisted languages and language patterns.
  6. Develop a testing protocol to verify the agent’s behavioral outcomes and its proper handling of blacklisted languages and language patterns.
  7. Evaluate the agent via the testing protocol.
  8. Based on the test assessment outcomes, return to steps 1–7 to adjust the agent’s behavior and align it with the desired outcomes.
  9. Upon deployment, continuously monitor the agent’s social interactions to detect and block blacklisted languages and language patterns from reaching the agent’s learning (a minimal filtering sketch follows this list).
  10. Routinely evaluate the agent against the testing protocol, revisiting the guidance of step 8.
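For example, the continuous monitoring called for in step 9 could begin with something as simple as a pattern filter that screens incoming interactions before they reach the agent’s learning pipeline. The sketch below is a minimal illustration, assuming a hypothetical regex-based blacklist and a hypothetical filter_for_learning() entry point; a production system would rely on far richer language models and moderation services.

```python
import re

# Hypothetical blacklist of language patterns the agent must never learn from
# (step 4 of the plan). The patterns here are illustrative placeholders only.
BLACKLISTED_PATTERNS = [
    re.compile(r"\bhate_term_\w+\b", re.IGNORECASE),
    re.compile(r"\bslur_placeholder\b", re.IGNORECASE),
]

def violates_blacklist(message: str) -> bool:
    """Return True if the incoming message matches any blacklisted pattern."""
    return any(pattern.search(message) for pattern in BLACKLISTED_PATTERNS)

def filter_for_learning(messages):
    """Step 9: screen social interactions before they reach the agent's
    learning pipeline, keeping blocked items for later review."""
    accepted, blocked = [], []
    for message in messages:
        (blocked if violates_blacklist(message) else accepted).append(message)
    return accepted, blocked

if __name__ == "__main__":
    incoming = ["nice weather today", "try this hate_term_example"]
    safe, rejected = filter_for_learning(incoming)
    print(f"learn from: {safe}")
    print(f"blocked:    {rejected}")
```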
Unpredictability

Because of the complexities of AI’s logic and learning, the outcome of an AI-based system may not be predictable; it is difficult to determine what the system will do. When such AI-based systems are in charge of systems whose malfunctions would have a severe human impact, this unpredictability can exact an inconceivably high cost.

Controlling the behaviors of a system that is unpredictable is challenging. Classically, the best options for controlling unpredictable systems are to set up guidelines of acceptable behavior, to monitor for behaviors that occur outside of these guidelines, and then to take corrective action upon such detection.

This can be implemented by developing a supervisory control layer that evaluates all of the AI-based system’s outputs and selectively inhibits, prohibits, or modifies outputs that would cause undesirable behaviors. Think of this as being similar to a firewall for the commands coming out of the AI system. This supervisory control layer must operate independently of the AI system in order to have executive control over it, and never vice versa. A fail-safe or fail-secure approach, depending upon the nature and criticality of the system, is necessary to secure AI-based systems from unpredictable outcomes.
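As a rough illustration, the supervisory control layer can be thought of as a thin, independent filter on every command the AI system emits. The following sketch assumes a hypothetical Command structure and hypothetical per-actuator limits; it is not any particular vendor’s safety interface, only the inhibit/prohibit/modify pattern described above.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal sketch of a supervisory control layer that sits between an AI
# system and its actuators. The names (Command, SAFE_LIMITS) and limits
# below are illustrative assumptions, not a real control interface.

@dataclass
class Command:
    actuator: str
    value: float

# Guidelines of acceptable behavior, defined and owned by the supervisory
# layer rather than by the AI system it controls.
SAFE_LIMITS = {"throttle": (0.0, 0.8), "steering_deg": (-30.0, 30.0)}

def supervise(command: Command) -> Optional[Command]:
    """Inhibit, prohibit, or modify an AI-issued command before execution."""
    if command.actuator not in SAFE_LIMITS:
        return None                               # prohibit unknown outputs (fail safe)
    low, high = SAFE_LIMITS[command.actuator]
    clamped = min(max(command.value, low), high)
    return Command(command.actuator, clamped)     # modify out-of-range values

if __name__ == "__main__":
    for raw in (Command("throttle", 1.4), Command("self_destruct", 1.0)):
        print(raw, "->", supervise(raw))
```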

Enhanced Malware

Just as AI systems can be used to learn and adapt to malware threats, they can be used to enhance the threat potential of malware. “Intelligent” malware could manifest a threat to systems and society in a myriad of ways. Socially enhanced malware could impersonate trusted colleagues or friends to obtain sensitive information or lead individuals into situations of peril. Intelligent malware agents could coordinate reconnaissance and attack efforts to target high-value or high-impact assets. These high-impact targets could be critical infrastructure, hospital equipment, or corporate assets.

Expert Systems

An expert system is a form of AI whose purpose is to model a human expert’s decision-making process. Expert systems have a knowledge base of facts and rules of an information domain. To make “decisions,” an expert system uses an inference engine to apply rules to known facts in the knowledge base to arrive at new facts. Expert systems focus on the “what,” not the “how.”
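The following is a minimal sketch of a forward-chaining inference engine of the kind an expert system uses: a rule fires when its premises are present in the fact base, adding a new fact, and the engine repeats until nothing more can be derived. The facts and rules shown are simplified, security-flavored placeholders, not any real product’s knowledge base.

```python
# Knowledge base: known facts plus rules of the form (premises, conclusion).
facts = {"port_scan_detected", "multiple_failed_logins"}

rules = [
    ({"port_scan_detected"}, "reconnaissance_activity"),
    ({"multiple_failed_logins"}, "credential_attack_suspected"),
    ({"reconnaissance_activity", "credential_attack_suspected"},
     "active_intrusion_attempt"),
]

def infer(facts, rules):
    """Forward chaining: apply rules to known facts until no new facts emerge."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(facts, rules))
# The derived set now also contains 'active_intrusion_attempt'.
```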

Expert systems are arguably the first successful form of AI, with many applications.

They have been used in medical diagnosis, genetic engineering, credit authorization, fraud detection, chemical analysis, and more. Some examples of how expert systems are used for security are in risk management decision-making, cybersecurity attack detection and intervention, and vulnerability scanning.

Expert systems are well suited to assist in risk management decision-making.

Although empirical methods are employed, practical project risk management ultimately rests on human expertise. A human expert makes risk management decisions based on an assessment of facts, their knowledge and experience, guidelines, education, training, and best guesses. When this knowledge is captured in an expert system, the system can reveal risks faster than a human could.

Expert systems can also be used against cyber attacks. An expert system’s rules and fact base can encode many different types of attacks. When an expert system detects the symptoms of a cyber attack, it can compare those symptoms and attack features against its rules and fact base to help identify the specific attack and present countermeasures for it. Because identifying attacks is difficult, an expert system can be a valuable aid in identifying and resolving them.

An expert system can be helpful with vulnerability scanning when its knowledge base has detailed knowledge and rules about the IT environment and its vulnerabilities.

Neural Networks

An artificial neural network is a computing system inspired by biological neural networks. These systems can “learn” by exposure to examples. Neural networks support two main learning paradigms: supervised learning, in which the network is trained on labeled examples (like being taught by a teacher), and unsupervised learning, in which the network finds patterns in unlabeled data.

Perhaps the most significant security vulnerability of neural networks is how they can be spoofed. Neural networks trained by supervised learning to recognize specific patterns and environmental features are particularly susceptible. Because of the specificity of their training and the way their pattern recognition algorithms identify the objects they are trained to recognize, small changes imposed on those objects can change how the neural network classifies them.
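A toy example makes the point concrete. The sketch below uses a small, made-up linear classifier and a fast-gradient-sign-style perturbation to show how a targeted change to an input can push the classifier’s output across its decision boundary. The weights, input, and step size are all illustrative assumptions (and the step is exaggerated for clarity); real attacks against deep networks are more involved.

```python
import numpy as np

# Hypothetical "trained" linear classifier: probability of class 1
# (e.g., "stop sign") is sigmoid(w . x + b). Weights are made up.
w = np.array([2.0, -1.5, 0.5])
b = -0.2

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

x = np.array([0.9, 0.1, 0.4])   # original input, confidently class 1
y_true = 1.0

# Gradient of the cross-entropy loss with respect to the input.
grad_x = (predict(x) - y_true) * w

# Signed step in the direction that increases the loss (FGSM-style).
epsilon = 0.5                    # exaggerated for this toy example
x_adv = x + epsilon * np.sign(grad_x)

print(f"original input : {predict(x):.2f} probability of class 1")
print(f"perturbed input: {predict(x_adv):.2f} probability of class 1")
```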

To understand why this is a security issue, imagine a neural network used by an autonomous vehicle to recognize control signs in its environment. A neural network trained this way could recognize and respond to the features of an unknown environment, such as a stop sign. A stop sign is normally easy for a human to recognize, even when defaced. However, change the face of the stop sign, whether by vandalism or by the application of a few stickers, and the neural network may interpret the defaced sign as something else entirely.

A stop sign misread as any other type of sign undermines the system in many ways and poses a potentially serious risk to life and safety.

Mobile Apps and Their Ecosystem

A mobile application is a type of application software designed to run on a mobile device, such as a smartphone or tablet computer. Mobile apps generally run in a sandbox environment on the mobile device. Two common mobile ecosystems are Apple’s iOS mobile operating system and Android. Regardless of the mobile platform, it is important to secure the device. Two ways to do this are at the code level and at the human behavioral level.

iPhone and Apple’s iOS Mobile Operating System

The iPhone is a line of smartphones designed and marketed by Apple, first released on June 29, 2007. iPhones run Apple’s iOS mobile operating system. The iPhone was an advancement in mobile computing: it standardized the touchscreen keyboard, merged a phone with a digital media player, included an Internet browser, and supports a vibrant ecosystem of applications (“apps”) that run on the phone.

Android

Android is a mobile operating system developed by Google. Android is based on a modified Linux kernel and other open-source software. Android was designed mainly for touchscreen mobile devices such as smartphones and tablets. Android was first released commercially in September 2008. Ongoing Android development is by Google and the Open Handset Alliance.

Android has a security model that supports an ecosystem of applications and devices built on the Android platform. Android has multilayered security designed to support an open platform while still protecting all users of the platform.

Mobile Code

Mobile code is any program, application, or content capable of movement while embedded in an email, document, or website. Some examples of mobile code include Java Applets, client-side scripts (JavaScript, VBScript), ActiveX controls, dynamic email, viruses, Trojan horses, and worms. Mobile code is often termed executable content, remote code, and active capsules.

There are two categories of mobile code security concerns: attacks against the remote host on which the program is executed, as with malicious applets or ActiveX controls, and attacks resulting from the subversion of the mobile code and its data by the remote execution environment. Preventing each type of attack requires protecting system resources and data. Controlling mobile code thus becomes an issue of enforcing isolation controls around the component.

A sandbox is one of the most effective means of controlling mobile code. It is a protected and restrictive runtime environment for mobile code. Sandbox restrictions include limitations on networking, memory, system access, and storage. Sandboxes often grant access explicitly. Much of a sandbox’s security comes from configuring it to restrict mobile code’s access to system resources, a configuration typically controlled by the sandbox’s system administrator.
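As a simplified illustration of sandbox-style restriction, the sketch below (assuming a POSIX host and a hypothetical script name) caps the memory and CPU available to a piece of untrusted code before it runs. Real mobile sandboxes, such as iOS’s App Sandbox or Android’s per-app user ID model, enforce much richer policies at the operating system level.

```python
import resource
import subprocess

# Sketch only: cap resources for an untrusted child process on a POSIX host.
# The script name "untrusted_mobile_code.py" is a hypothetical placeholder.

def apply_limits():
    # Limit the child's address space to 256 MB and its CPU time to 5 seconds.
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024, 256 * 1024 * 1024))
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))

result = subprocess.run(
    ["python3", "untrusted_mobile_code.py"],
    preexec_fn=apply_limits,     # applied in the child before the code starts
    capture_output=True,
    timeout=10,                  # wall-clock backstop in the parent
)
print(result.returncode)
print(result.stdout[:200])
```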

Securing Mobile Devices

There are several recommended practices to protect mobile devices from being hacked and to improve their security. It is important to ensure that all members of an organization understand and embrace the following principles:

  • Be careful on public Wi-Fi networks.
  • Don’t compromise security, which includes not jailbreaking your device.
  • Be prepared to track and lock your device.
  • Delete messages from people you don’t know.
  • Do not open random emails or links.
  • Practice safe browsing.
  • Keep software updated.
  • Be careful of what you install.
  • Conceal or disable lock screen notifications.
  • Avoid public chargers.