
What Is Artificial Intelligence?
Artificial intelligence is the simulation of human intelligence in computers and machines. AI systems can learn, solve problems, and make decisions, and some advanced systems can perform tasks faster and at a larger scale than people can.
Types of AI
“AI” is a broad term that covers a wide range of technologies and approaches that mimic human intelligence and decision-making processes. There are many ways to categorize AI, such as by:
- Capabilities: The level of intelligence and task specificity
- Learning Approaches: How the systems acquire knowledge and improve performance
- Methods: The techniques and algorithms used to process information and make decisions
Based on Capabilities
- Narrow AI (ANI): e.g., spam filters, recommendation systems
- General AI (AGI): theoretical human-level AI (not yet achieved)
- Super AI (ASI): hypothetical AI surpassing human intelligence
Based on Learning Approaches
- Supervised Learning: e.g., image classification, spam detection
- Unsupervised Learning: e.g., customer segmentation, anomaly detection (see the sketch after this list)
- Reinforcement Learning: e.g., game AI, robotic motion control
- Semi-Supervised Learning: e.g., text classification with partial labeling
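The list above names customer segmentation as a typical unsupervised-learning task. As a minimal sketch of the idea, the following Python example (using scikit-learn on made-up spending data, not any real system) groups customers into clusters without ever being given labeled answers:

```python
# Minimal unsupervised-learning sketch: customer segmentation with k-means.
# All data is synthetic and purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)

# Each row is one customer: [annual spend in dollars, visits per month].
customers = np.vstack([
    rng.normal([200, 2], [50, 1], size=(50, 2)),     # occasional shoppers
    rng.normal([1500, 12], [200, 3], size=(50, 2)),  # frequent big spenders
])

# Scale features so spend doesn't dwarf visit counts, then cluster.
# No labels are provided anywhere: the algorithm finds the groups itself.
features = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

print(segments[:5], segments[-5:])  # cluster IDs for the first and last customers
```

The algorithm is never told what the groups mean; it simply finds structure in the data, which is what distinguishes unsupervised learning from the supervised approaches listed above.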
Based on Methods
- Machine Learning: e.g., predictive analytics, pattern recognition
- Natural Language Processing: e.g., chatbots, language translation
- Computer Vision: e.g., facial recognition, object detection
- Robotics: e.g., autonomous vehicles, industrial robots
- Expert Systems: e.g., medical diagnosis tools, financial planning systems
These varied approaches allow AI to tackle a wide range of challenges across industries. For instance, AI is capable of analyzing reams of data to diagnose or predict illnesses, assess the risk of financial investments, suggest the optimal time to plant or harvest crops, and more. Beyond providing game-changing insights, it can also automate repetitive or unfulfilling tasks, freeing people to do more important or creative work.
How Does AI Work?
A basic explanation is that AI uses mathematical models called algorithms to process huge volumes of data. As it processes this data, AI learns from the patterns and relationships in the information, drawing on various methods, including statistical techniques and approaches borrowed from physics, to “learn” without additional programming. For example, computers can be trained using supervised learning, in which they are fed labeled data sets with a predefined expected output.
Many machine learning systems rely on neural networks, structures that loosely mimic the human brain. Like neurons, interconnected units process information and relay it to each other to find connections and meaning in data, and these networks can even learn from their mistakes. A more complex version of this is deep learning, which involves huge, layered neural networks that make multiple passes at the data, extracting progressively deeper insights and connections.
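To make the supervised-learning idea concrete, here is a minimal sketch in Python using scikit-learn; the dataset is synthetic, and the network size is an arbitrary choice for illustration. A small neural network is fed labeled examples and learns to predict the expected output:

```python
# Minimal supervised-learning sketch: a small neural network is trained on
# labeled examples (inputs paired with known outputs), as described above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic labeled dataset: 500 examples, 10 features, binary labels.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of interconnected units relay information forward;
# training adjusts their connections to reduce mistakes on the labels.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```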
Cybersecurity Risks of AI
While AI already brings many benefits to organizations and individuals, there are valid concerns about its vulnerabilities to hacking, attacks, and misuse, and many of these risks arise from AI’s growing ability to blur the line between human and machine interactions. An IBM report pointed out that “cybercriminals are increasingly logging in rather than hacking into networks through valid accounts,” and the FBI has warned that cybercriminals are increasingly using AI tools to conduct sophisticated phishing, social engineering, and voice/video cloning scams.
These AI-driven tactics enhance the effectiveness of existing schemes by increasing the speed, scale, and automation of cyberattacks, allowing for very convincing, personalized deceptions. As a result, both individuals and businesses face heightened risks of data theft, financial losses, and reputational damage. And because many of these attacks come through legitimate access points, such as a login page, traditional cybersecurity tactics like antivirus software and firewalls may prove insufficient against increasingly sophisticated methods of infiltration.
Examples of AI-Based Cybersecurity Risks
Here are some other examples that illustrate how AI can be exploited or misused in cybersecurity contexts.
Adversarial AI Attacks
Cybercriminals can use machine learning to exploit vulnerabilities or introduce malicious inputs to gain access to systems. Beyond gaining access, inserting malicious inputs into datasets can affect how the AI assesses and learns from the data, leading to incorrect or misleading outputs. This is known as data poisoning.
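As a toy illustration of data poisoning (a deliberately simplified sketch, not a reproduction of any real attack), the following Python example flips a fraction of training labels in a synthetic dataset and compares a model trained on clean data with one trained on the poisoned version:

```python
# Toy data-poisoning sketch: corrupting training labels changes what the
# model learns. All data is synthetic and purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression().fit(X_tr, y_tr)

# An attacker flips 30% of the training labels ("malicious inputs").
rng = np.random.default_rng(0)
poisoned_idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[poisoned_idx] = 1 - y_poisoned[poisoned_idx]

poisoned_model = LogisticRegression().fit(X_tr, y_poisoned)

print(f"clean model accuracy:    {clean_model.score(X_te, y_te):.2f}")
print(f"poisoned model accuracy: {poisoned_model.score(X_te, y_te):.2f}")
```

The poisoned model’s held-out accuracy typically drops, showing how corrupted training data propagates into incorrect or misleading outputs.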
Botnets
AI-powered bots can plan and coordinate large-scale operations such as distributed denial-of-service (DDoS) attacks. They can also learn and adapt quickly, making them harder to stop.
Model Theft and Inversion
The models that AI systems use are often proprietary, making them valuable intellectual property. Attackers may seek to infiltrate systems to steal or manipulate the models. In model inversion, cybercriminals may be able to use outputs to reverse-engineer or reconstruct private information used in training and processing.
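One simplified way to picture model extraction, a common form of model theft: an attacker who can only see a deployed model’s predictions trains a surrogate to mimic it. The Python sketch below is entirely synthetic, with a scikit-learn model standing in for a proprietary system:

```python
# Simplified model-extraction sketch: an attacker who can only query a
# deployed model's predictions trains a look-alike surrogate on its answers.
# Everything here is synthetic and purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# The "victim": a proprietary model whose internals the attacker cannot see.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X, y)

# The attacker submits their own query inputs and records the outputs.
rng = np.random.default_rng(1)
queries = rng.normal(size=(2000, 8))
stolen_answers = victim.predict(queries)

# Training on those answers yields an approximate copy of the model.
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_answers)

# Agreement rate: how often the copy matches the victim on fresh inputs.
probe = rng.normal(size=(500, 8))
print((surrogate.predict(probe) == victim.predict(probe)).mean())
```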
Deepfakes
AI can create videos or audio recordings that look and sound realistic, which criminals can use for activities like fraud, blackmail, identity theft, or misinformation.
Autonomous Weaponization
Because AIs can learn, criminals can use them to create systems that can attack other systems with no need for human oversight or intervention.
Data Privacy
AIs use massive amounts of data that can be sensitive or private, especially in sectors like healthcare and military intelligence. This potentially makes a data breach of an AI system incredibly damaging and even life-threatening.
Ensuring Secure AI Development
Technology leaders must ensure that any AI systems developed or implemented by their organization protect all data: both proprietary data about processes and products and confidential information related to employees, vendors, and customers. AI models can be trained to identify sensitive data and apply measures, such as encryption, to protect it. AI can also be trained to recognize behaviors that could indicate misuse, such as unexpectedly large downloads of information or other unusual user behavior, as in the sketch below.
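As a minimal sketch of that misuse-detection idea, the following Python example uses scikit-learn’s IsolationForest on invented session logs (the features and thresholds are arbitrary illustrations) to flag a suspiciously large download as an outlier:

```python
# Minimal misuse-detection sketch: flag anomalous download behavior.
# The session logs are invented and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Each row is one user session: [MB downloaded, files accessed].
normal_sessions = rng.normal([50, 10], [15, 3], size=(500, 2))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# New sessions to screen; the last is a suspicious bulk download.
new_sessions = np.array([[45.0, 9.0], [60.0, 12.0], [5000.0, 800.0]])
print(detector.predict(new_sessions))  # 1 = looks normal, -1 = flagged
```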
AI’s rapid development has made it difficult for lawmakers and industry organizations to keep up. However, various regulations and principles are already in place as we continue to explore this new frontier.
Industry Frameworks and Guidelines
Major cybersecurity organizations have already created frameworks to help IT leaders design and implement AI systems that ensure data security and integrity.
The United Kingdom’s National Cyber Security Centre (NCSC) developed the Guidelines for Secure AI System Development with the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and agencies from 17 other countries. This document focuses on the overall security of systems across four major areas: design, development, deployment, and operations and maintenance. The NCSC has also published a set of machine learning principles that provide high-level context about the secure development, usage, and application of AI systems, making them an excellent resource for leaders and other decision-makers in non-technical roles.
The U.S. Government Accountability Office developed an AI Accountability Framework for federal agencies, also intended to help congressional legislators understand and address the security concerns and complexities of artificial intelligence. Among other things, it spells out critical practices around governance, data, performance, and monitoring.
The National Institute of Standards and Technology (NIST) offers an AI Risk Management Framework (AI RMF), along with a companion profile for generative AI, which can help organizational leaders decide how best to manage the technology’s risks in a way that aligns with their objectives, industry regulations and best practices, and other external and internal factors.
Many other frameworks have been developed in partnership with world governments and key cybersecurity agencies, including through the Quadrilateral Security Dialogue, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, and the Global Partnership on Artificial Intelligence.
Legislation
Beyond voluntary frameworks, several countries have acted to enshrine AI security and ethics into law. In 2023, President Biden signed an executive order with specific provisions to ensure the safety and security of AI systems, and various bills regarding AI security are pending before Congress and state legislatures. The European Union recently passed the AI Act, which codifies strict requirements for model management. Other governments around the world are also considering AI governance legislation.
Organizational Policies and Culture
Laws and frameworks are only as effective as the people who abide by them. Nurturing a culture of AI security through organizational processes, policies, and communication is vital for any organization seeking to implement AI tools. Leaders can accomplish this by prioritizing AI security and data governance: requiring their organization to follow all applicable laws and selecting a framework for how AI systems are designed, implemented, and used. Because this area of technology is evolving so rapidly, a hands-on approach is required to monitor internal procedures as well as industry developments and guidance.
In addition, greater transparency about AI security can nurture trust and credibility. More visibility into algorithms, data usage, and security measures can help consumers and clients feel more confident about AI systems and their privacy.
Case Studies: AI Done Right
More and more sectors are leveraging AI to make their operations more efficient, better serve customers, and improve employee productivity.
Retail
Among other tasks, Walmart uses AI for inventory management. This allows stores to stock and deliver items according to predicted demand, improving product availability and customer satisfaction while reducing costs related to storage and over-ordering. Walmart has also used AI to store customers’ preferred brands and to provide an AI voice assistant for store employees.
Agriculture
CropX is an AI platform that monitors soil health, which farmers can use to inform their crop management. The company’s solutions have resulted in significant reductions in water and fertilizer usage as well as an increase in crop yields.
Healthcare
Hospitals and other healthcare providers are under increasing pressure to cut costs and increase efficiency while maintaining high standards of patient care. Many use AI-driven predictive analytics tools, like GE Healthcare’s Command Center, to plan for patient volume, order supplies and equipment, and schedule staff.
Finance
Bank of America, Capital One, and Wells Fargo have all debuted AI-driven assistants that can talk to or text with customers to handle routine transactions and inquiries. Some institutions also want to use AI to analyze customer data and provide personalized banking recommendations and budgeting guidance. However, a Citi survey reveals that the primary use of AI in finance is internal, with most respondents using the technology to generate content that enhances employee productivity.
Case Studies: Where AI Can Go Wrong
Despite these successes, some organizations have experienced notable failures in deploying AI:
Airlines
Air Canada had to pay damages to a customer after its AI-driven chatbot gave him incorrect information about a bereavement discount. The airline refused to honor the discount, claiming it wasn’t responsible for the chatbot’s error. A tribunal disagreed.
Healthcare
In one notorious healthcare example, an AI tool was being trained to detect skin cancer using photos of malignant lesions. Upon closer investigation of the training data, researchers found that the model was associating the presence of a ruler in the photos (often used by clinicians to measure lesions) with malignancy, rather than the size or shape of the lesions themselves.
Criminal Justice
Many criminal justice advocates have expressed concern about bias and inaccuracy in AI tools and systems used for suspect identification. In Detroit, a facial recognition tool analyzed surveillance video of a robbery and carjacking and offered up a photo of a woman from a database as a probable suspect. The woman was arrested and charged even though she was visibly eight months pregnant, matching neither the woman in the surveillance video nor the victim’s description.
Media Companies
Media sites are grappling with the use of AI in their content, occasionally with unintended consequences. Microsoft used a generative AI tool to craft polls that appeared alongside news articles on its platform. The tool drew criticism and anger from readers when it generated a poll asking them to speculate on the cause of a young woman’s death, placed next to a Guardian article about the tragedy.
Leading AI Into the Future
Artificial intelligence is a cornerstone of innovation in information technology. AI tools are revolutionizing operations, efficiency, and decision-making across industries, but the technology also comes with potential pitfalls and security concerns. IT professionals will play a major role in ensuring that AI systems and tools are developed, implemented, and applied in ways that keep private data secure and don’t cause harm to individuals, groups, or society at large. An advanced IT education that incorporates cybersecurity concepts, such as the online Master of Science in Information Technology with a Cybersecurity specialization at Pace University, can help you prepare for the challenges of the future.