
What is AI?

The complete guide on Artificial Intelligence for employer brand, communications, and talent people.


Artificial intelligence

/ɑːtɪˈfɪʃ(ə)l/ /ɪnˈtɛlɪdʒ(ə)ns/

noun: artificial intelligence; noun: AI

The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Source: Oxford Dictionary

A short history of AI

Artificial Intelligence as we know it began its journey in the mid-20th century, when computers could play games such as checkers against mid-level players (and win), and the famous ‘Turing Test’ was created.

The Turing Test was designed to assess a machine’s ability to behave similarly to, or on a par with, human intelligence. It checks whether an AI can successfully hold a conversation with a human without the person realising they’re talking to a computer.

The U.S. Department of Defense saw potential in AI and funded many projects. But progress was slow, and aside from brief revivals in the 1980s and 1990s, interest in AI declined – a period known as the ‘AI Winter’.

AI only really began to see success in the 2010s, thanks to a combination of increased theoretical knowledge and affordable computing power, which meant that large volumes of data could be stored and processed far more efficiently.

How did AI evolve?

In the 1950s, scientists hoped to replicate the human mind, using information to teach computers how to solve problems. Neural networks were conceptualised and designed to mimic the human brain, enabling them to be trained to learn complex patterns.

Thanks to the vast improvement in computing power, today the most common approach is Deep Learning, or more specifically ‘Deep Neural Networks’. This is a subfield of artificial intelligence and machine learning.

Data from a database is passed through a system of layers that can be rather ‘deep’, hence the name.

For example, an image of a cat is processed in four layers.

The input is a matrix of numbers. In the first layer, edges and lines are encoded. In the next layer, edges and lines are connected. The third layer encodes the eyes and ears, and in the last layer the cat is recognised.

The network learns what to extract in each layer on its own. The challenge for humans is finding the best number and size of the layers in order to get the best accuracy rate.
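To make the layered idea concrete, here is a minimal sketch of a small ‘deep’ network written in Python with Keras. The input size, layer sizes and the two classes are illustrative assumptions, not a recommended design.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),                         # a 64x64 colour image as a matrix of numbers
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),     # early layer: edges and lines
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),     # deeper layer: edges and lines combined into shapes
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),              # higher-level parts (eyes, ears, ...)
    tf.keras.layers.Dense(2, activation="softmax"),            # final layer: the decision, cat or not
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()   # training on labelled images would follow with model.fit(...)

Choosing how many layers to use, and how large each one should be, is exactly the tuning problem described above.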

How does AI work?

Here we should note the difference between AI and Machine Learning. They are part of the same family, and often used interchangeably, but they’re not identical.

AI can be seen as a set of technologies designed to mimic human thought patterns, while Machine Learning is the approach of letting a computer learn and improve from data without being explicitly programmed by a human.

Google defines them as follows:

  • AI is the broader concept of enabling a machine or system to sense, reason, act, or adapt like a human
  • ML is an application of AI that allows machines to extract knowledge from data and learn from it autonomously

Source: Google, AI vs Machine Learning

So, computers need external information to learn from, which is supplied either from a database or in the form of rewards. AI uses vast amounts of data to predict patterns and present an output based upon an input.

Three ways of learning are distinguished:

Supervised learning

Supervised learning needs a database that is already classified, meaning each input has a result; a so-called label. As an example, images of elephants and zebras are labelled ‘elephant’ and ‘zebra’ respectively. The computer then learns patterns for both ‘elephant’ and ‘zebra’ and can recognise these in new images. This form of learning is commonly used to train computers to carry out specific tasks like object recognition. Research shows that the classification works best with a large database and clear labels.
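As an illustration, here is a minimal supervised learning sketch in Python using scikit-learn. The numeric ‘features’ stand in for information extracted from the images and are purely hypothetical; the important part is that every training example arrives with a label.

from sklearn.ensemble import RandomForestClassifier

# Toy labelled data: [weight_in_tonnes, stripe_score] -- purely illustrative features
X_train = [[5.4, 0.1], [6.0, 0.0], [0.4, 0.9], [0.35, 0.95]]
y_train = ["elephant", "elephant", "zebra", "zebra"]

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)           # learn patterns from the labelled examples

# Predict the label of a new, unseen example
print(model.predict([[0.38, 0.85]]))  # -> ['zebra']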

Unsupervised learning

For unsupervised learning, the input data is unlabelled, and therefore the computer must find the pattern itself.
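A minimal sketch of unsupervised learning, again using scikit-learn: the same toy data as above, but with the labels removed, so a clustering algorithm (here k-means) has to find the groups by itself.

from sklearn.cluster import KMeans

# Unlabelled toy data -- the two hidden groups are there, but nothing says so
X = [[5.4, 0.1], [6.0, 0.0], [0.4, 0.9], [0.35, 0.95]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)   # e.g. [1 1 0 0] -- the computer found the groups on its own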

Reinforcement learning

A third option is to give feedback to the computer each time it carries out a task, allowing it to learn what is ‘good’ or ‘bad’. The computer will first start with random decisions and feedback is given at each step. In this way the computer learns through rewards and penalties. This form of learning can be used in training a computer to play a game, such as Chess or Go, and even beat world champions.
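The sketch below shows this idea in miniature using tabular Q-learning, a classic reinforcement learning technique. The ‘game’ is a made-up five-square corridor with a reward only for reaching the final square; the learning-rate, discount and exploration parameters are illustrative.

import random

N_STATES, ACTIONS = 5, [-1, +1]           # move left / move right along the corridor
Q = [[0.0, 0.0] for _ in range(N_STATES)] # the computer's current estimate of each move's value
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Mostly act on what has been learnt so far, sometimes explore a random move
        a = random.randrange(2) if random.random() < epsilon else Q[state].index(max(Q[state]))
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0   # feedback: a reward only at the goal
        # Nudge the value of (state, action) towards the reward plus estimated future value
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print(Q)  # after training, 'move right' typically scores higher in every square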

What can go wrong?

There is a famous expression in AI: garbage in, garbage out.

For supervised learning, classification results are generally poor when the dataset contains only a few examples or when the labels are poorly assigned.

For example, returning to the zebras: there may be two images of a zebra, but one is labelled ‘elephant’. From such input, the computer cannot learn anything useful.

Furthermore, bad datasets can be biased, lack diversity, or contain incorrect data.

As an example, in credit scoring, the training is biased if the database only holds information on clients who have credit but not on those who do not have credit.

Another example of bias could be training a model to distinguish between male and female behaviour based on a database that mainly holds information on men.

For reinforcement learning, the feedback has to be accurate; otherwise, we run the risk of the computer learning the wrong decisions.

What is crucial for success with deep learning?

For supervised and unsupervised learning, the most crucial point is the database. It is essential to base the training on the biggest database available. Even when competing approaches use the same techniques, the better dataset will usually win: the larger the database, the better the result will be.

For supervised learning, it is also crucial to use the correct labels, as seen in the previous example with ‘elephants’ and ‘zebras’. Adding new data to the database ensures the computer continuously improves its learning.

In reinforcement learning, the accuracy of the output depends on the quality of the feedback.

Where do we use AI today?

Today AI is used in a number of applications, many of which we take for granted or simply don’t know are AI-driven, such as speech recognition (e.g. Siri), self-driving cars and algorithms showing us personalised content online. One of the newest applications is in chatbots, and augmented reality is beginning to gain traction.

There are other areas in which AI operates that are not commonly known. In banking, AI is used for financial fraud detection and credit scoring.

In medicine and health care, AI is used for finding tumours and as a health care assistant. In fact, one study showed that a deep learning model could diagnose breast cancer better than a group of pathologists.

In manufacturing, AI is used to catch inefficiencies, optimise the entire operations process and provide real-time production insights from the factory floor.

There are also instances of a more creative approach to AI, where data from different disciplines is combined, such as estimating economic activity by counting swimming pools in aerial images, or guiding agricultural investments by analysing satellite imagery.

What is Generative AI?

The latest iteration of Deep Learning is Generative AI. Rather than just ‘rearranging’ existing data, it can take text-based prompts and create something entirely new, a major breakthrough in the AI field. The best-known examples of existing engines are ChatGPT and DALL-E.

You may have heard the term ‘Large Language Model’ (LLM) used in association with Generative AI. LLM is an umbrella term for the technology behind this latest iteration of AI.

The power of LLMs lies both in the vast amount of data they can utilise and in the fact that they can be adapted quickly to a wide range of tasks without needing task-specific training. They can also adapt their output contextually, based on previous input.
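As a small illustration of that contextual adaptation, the sketch below sends a short conversation to an LLM through the OpenAI Python client. It assumes the openai package is installed and an API key is set in the OPENAI_API_KEY environment variable; the model name is illustrative and may differ.

from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "user", "content": "Suggest a name for a friendly office robot."},
]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# The follow-up only makes sense in the light of the earlier turns,
# which is exactly what the accumulated message history provides.
messages.append({"role": "user", "content": "Now make it sound more formal."})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)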

Thanks to these new-found talents, AI is now closer than ever to being able to pass the Turing Test.

What are the limits for AI?

AI is primarily used to help us perform better at what we are doing. As the models and engines continue to evolve, it will likely have a marked impact on how we accomplish certain tasks.

Computers are trained to perform specific tasks, not a diversity of simultaneous tasks. A model that has learnt to distinguish an elephant from a zebra will not be able to play chess, or even recognise a dog.

So you see, computers learn very differently from how humans learn, and they will probably never be such diverse thinkers as human beings. But although each AI is trained to perform only one task, it carries out that task very quickly and with high precision (if the AI is well trained by us, of course).

For this single task, the computer is likely to be faster and less prone to errors than our brain, which can be distracted and make misleading assumptions.

AI is currently at a stage where it is mimicking, like a parrot. This is known as “narrow” AI, as it has a narrow focus. It has been engineered to become incredibly good at this, and there are still further developments to be made in this space.

However, it lacks the ability to understand nuance or complex concepts, or to deal with unusual or unexpected situations.

When AI can understand the unexpected, we will have reached what is referred to as “general” AI, which is often loosely equated with sentience. We’re currently a long way off from this.

AI and ethics

With great power comes great responsibility, and AI can be used for both good and bad. It can be used both to detect and to commit fraud, as well as to power recommendation systems that influence people’s behaviour.

We can do so much with AI, but it raises a lot of questions. Who is responsible if a surgical robot kills a patient, or a self-driving car injures a pedestrian?

Insurance companies must now be prepared for those instances.

Since the launch of Generative AI and its explosion into mainstream usage, this has already begun to be tested in a number of copyright cases around the intellectual property of AI output.

Especially in the creative space (e.g. art, music and writing), the data that the engines are trained on is owned by someone - so who owns the output? The engine (and therefore the company that built it), or the original artist(s)?

There will come a time when no one will be able to control the use of AI, just as no one can control what is on the internet. Each of us should make sure we abide by ethical standards, but shouldn’t we do that anyway, wherever we are?

And how good are we at that?

AI at Seenit

How we use AI

“Generative AI gets you to the fun bit, faster”

Ian Merrington, CTO, Seenit

AI has become a useful and integral part of the Seenit platform. It is currently used to make the software more intuitive and to help give employees a voice through video, faster: Machine-powered meets People-powered.

AI transcribing

Use Seenit’s built-in AI transcription tool to easily add subtitles to your videos, making them accessible to everyone. Google indexes subtitles, offering you the chance to boost your SEO as well as your employer branding!

AI translating

Easily create translated subtitles for your video. Simply select which language you wish to generate subtitles in, and the Seenit platform will do the rest.

AI filming ideas

Sometimes it can be hard to know where to start. You set the brief, and Seenit’s AI can offer your contributors suggestions of what to film, and where to film it, to achieve the best possible results.

AI video description suggestions

Describing a video for social media can be tough, but with Seenit AI, intelligent video descriptions can be created at the click of a button.

In addition

There is plenty more on our roadmap - too much to feature here! The future of Seenit involves more features that offer AI-enhanced brand-safe video editing, allowing you to easily create the best video possible, in the most effective way.

What’s next?

“The possibility to merge our own authentic material with a beautiful AI generated transition that makes it look like it’s been shot by Spielberg is exciting, as long as the authenticity is still there from the people”

Ian Merrington, CTO, Seenit

AI is here to stay, and the key to its success lies in responsible and intelligent usage. As the tech evolves and becomes more commonplace, we want it to help, not hinder, output and to amplify voices, not dull them.

Recent progress in generative AI brings exciting possibilities for creating beautiful content, faster. These tools need to be used alongside authentic, human content to elevate what is already on offer.

In a time when 73% of Employer Brand professionals say that AI has saved them time in their role, Seenit is determined to build a service that continues down this path.

We also want to reassure the 75% of Employer Brand professionals who are concerned that over-reliance on AI might diminish the human touch in Employer Branding: our aim is to ease their workload, not remove that human touch.

Seenit will continually evolve the platform to utilise AI to save time, offer ideas and empower companies to create the best possible Employer Brand content.
