With all the excitement around ChatGPT, it's easy to lose sight of the unique risks of generative AI. Large language models, a form of generative AI, are really good at helping people who struggle with writing English prose. They can help them unlock the written word at low cost and sound like a native speaker. But because they're so good at generating the next syntactically correct word, large language models may give a false impression that they possess actual understanding or meaning. The results can include a flagrantly false narrative, produced by calculated predictions rather than true understanding. So ask yourself: What is the cost of using an AI that could spread misinformation? What is the cost to your brand, your business, to individuals or society? Could your large language model be hijacked by a bad actor? Let me explain how you can reduce your risk. It falls into four areas: Hallucinations, Bias, Consent, and Security. As I present each risk, I'll also call out the strategies you can use to mitigate it. Ready? Let's start with the falsehoods, often referred to as "AI hallucinations".

Quick sidebar -- I really don't like the word "hallucinations" because I fear it anthropomorphizes AI. Let me explain. You've probably heard the news reports of large language models claiming they're human, or claiming they have emotions, or just stating things that are factually wrong. What's actually going on here? Well, large language models predict the next best syntactically correct word, not accurate answers based on an understanding of what the human is actually asking for. Which means the output is going to sound great, but might be 100% wrong. That wrong answer is a statistical error. Let's take a simple example.

Who authored the poems A, B, and C? Let's say they were all authored by the poet X, but there's one source claiming it was the author Z. We have conflicting sources in the training data. Which one actually wins the argument? Even worse, there may not be a disagreement at all, just, again, a statistical error. The response could very well be incorrect because, again, large language models do not understand meaning, and these inaccuracies can be exceptionally dangerous. It's even more dangerous when a large language model annotates its sources for totally bogus answers. Why? Because it gives the perception it has proof when it just doesn't have any.

Imagine a call center that has replaced its personnel with a large language model, and it offers a factually wrong answer to a customer. Right, here's your factually wrong answer. Now imagine how much angrier that customer will be when they can't actually offer a correction via a feedback loop.

This brings us to our first mitigation strategy: Explainability. You could offer inline explainability and pair a large language model with a system that offers real data, data lineage, and provenance via a knowledge graph. Why did the model say what it just said? Where did it pull its data from? Which sources? The large language model could then provide variations on the answer offered by the knowledge graph.

Next risk: Bias. Do not be surprised if the output for your original query only lists white male Western European poets. Want a more representative answer? Your prompt would have to say something like, "Can you please give me a list of poets that include women and non-Western Europeans?" Don't expect the large language model to learn from your prompt.
This brings us to the second mitigation strategy: Culture and Audits. Okay, culture is what people do when no one is looking. It starts with approaching this entire subject with humility, as there is so much that has to be learned and even, I would say, unlearned. You need teams that are truly diverse and multidisciplinary in nature working on AI, because AI is a great mirror into our own biases. Let's take the results of our audits of AI models and make corrections to our own organizational culture when there are disparate outcomes. And audit pre-deployment as well as post-deployment.

Okay, the next risk is Consent. Is the data that you are curating representative? Was it gathered with consent? Are there copyright issues? Right! Here's a little copyright symbol. These are things we can and should ask about, and they should be included in an easy-to-find, understandable fact sheet. Oftentimes we, the data subjects, have no idea where the heck the training data for these large language models came from. Where was that data gathered? Did the developers hoover up the dark recesses of the Internet? To mitigate consent-related risk, we need combined efforts of auditing and accountability. Right! Accountability includes establishing AI governance processes, making sure you are compliant with existing laws and regulations, and offering ways for people to have their feedback incorporated.

Now on to the final risk, Security. Large language models could be used for all sorts of malicious tasks, including leaking people's private information and helping criminals phish, spam, and scam. Hackers have gotten AI models to change their original programming, endorsing things like racism or suggesting people do illegal things. It's called jailbreaking. Another attack is an indirect prompt injection. That's when a third party alters a website, adding hidden data to change the AI's behavior. The result? Automation relying on AI potentially sending out malicious instructions without you even being aware.

This brings us to our final mitigation strategy, and the one that actually pulls all of this together, and that is education. All right, let me give you an example. Training a brand new large language model produces as much carbon as over 100 round-trip flights between New York and Beijing. I know, crazy, right? This means it's important that we know the strengths and weaknesses of this technology. It means educating our own people on principles for the responsible curation of AI, the risks, the environmental cost, the guardrails, as well as what the opportunities are.

Let me give you another example of where education matters. Today, some tech companies are just trusting that a large language model's training data has not been maliciously tampered with. I can buy a domain myself right now and fill it with bogus data. By poisoning the dataset with enough examples, you could influence a large language model's behavior and outputs forever.

This tech isn't going anywhere. We need to think about the relationship that we ultimately want to have with AI. If we're going to use it to augment human intelligence, we have to ask ourselves: What is the experience like for a person who has been augmented? Are they indeed empowered? Help us make education about the subject of data and AI far more accessible and inclusive than it is today. We need more seats at the table for different kinds of people with varying skill sets working on this very, very important topic. Thank you for your time.
My smartwatch tracks how much sleep I get each night. If I'm feeling curious, I can look at my phone and see my nightly slumber plotted on a graph. It might look something like this: on the y-axis, we have the hours of sleep, and on the x-axis, we have days. This is an example of a "time series". A time series is data about the same entity, like my sleep hours, collected at regular intervals, like daily. And when we have a time series, we can perform a "time series analysis". This is where we analyze the timestamped data to extract meaningful insights and predictions about the future.

And while it's super useful to forecast that I'm probably going to get about 7 hours of shut-eye tonight based on the data, time series analysis also plays a significant role in helping organizations drive better business decisions. So, for example, a retailer can use time series analysis to predict future sales and optimize inventory levels. On the purchasing side, a buyer can use it to predict commodity prices and make informed purchasing decisions. And in fields like agriculture, we can use time series analysis to predict weather patterns, informing decisions about when to plant and when to harvest.

So let's, first of all, introduce number one: the components of time series analysis. Then, number two, we're going to take a look at some of the forecasting models for performing time series analysis. And then, number three, we're going to talk about how to implement some of this stuff.

Okay, now let's talk about the components, first of all. One component is called "trend". This component refers to the overall direction of the data over time, whether it's increasing, decreasing, or perhaps staying the same. You can think of it like a line on the graph that's either going up, going down, or staying flat. That's the first component. The second one: "seasonality". This component is a repeating pattern in the data over a set period of time, like the way retail sales spike during the holiday season. So we might see a spike, then a dip, then the spike is back, and it keeps repeating like that. That is seasonality. The third component is "cycle". Cycle refers to repeating but non-seasonal patterns in the data. These might be economic booms and busts that happen over several years or maybe even decades, so it's a much smoother curve. And then lastly, there is "variation". Variation refers to the unpredictable ups and downs in the data that cannot be explained by the other components. This component is also known as "irregularity" or "noise", and when you plot it ... yeah, it's very difficult to pick out the trend. So those are some of the components of a time series.
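To make those components a bit more concrete, here's a minimal sketch in Python. It uses pandas, numpy, and statsmodels -- the library choices and the synthetic "sleep" numbers are mine, purely for illustration -- to split a daily series into trend, seasonality, and the leftover "noise":

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic daily "hours of sleep" series: a slow trend, a weekly
# seasonal pattern, and some random noise (all made up for illustration).
rng = np.random.default_rng(42)
days = pd.date_range("2024-01-01", periods=120, freq="D")
trend = np.linspace(7.5, 6.5, len(days))                           # slowly sleeping less
seasonality = 0.8 * np.sin(2 * np.pi * np.arange(len(days)) / 7)   # weekly pattern
noise = rng.normal(0, 0.3, len(days))
sleep = pd.Series(trend + seasonality + noise, index=days, name="hours_of_sleep")

# Decompose the series into the components described above.
result = seasonal_decompose(sleep, model="additive", period=7)
print(result.trend.dropna().head())    # the overall direction
print(result.seasonal.head(7))         # the repeating weekly pattern
print(result.resid.dropna().head())    # the irregular "noise" left over
```

If you plot result.trend, result.seasonal, and result.resid, you get roughly the three pictures described above: a sloping line, a repeating wave, and scribbles.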
But let's talk about the forecasting models that we can use to perform some analysis, and there are several popular forecasting models out there. One of the most well known is called the ARIMA model. Now, ARIMA stands for "Auto Regressive Integrated Moving Average", and the model is made up of three components. There's the "AR" part, that's the Auto Regressive component, and it looks at how past values affect future values. Then there's the "I", the "Integrated" or differencing component, and that accounts for trends and seasonality by differencing the data. And then there is the "MA" component, the Moving Average component, which smooths out the noise by removing non-deterministic or random movements from the time series. So that's ARIMA.

Another pretty popular one you'll often see is called "exponential smoothing". This model is used to forecast time series data that doesn't have a clear trend or seasonality, so it doesn't fit neatly into those buckets. It works by smoothing out the data, giving more weight to recent values and less weight to older values. And there are many other forecasting models out there; the right one to use, of course, depends on the data you're working with and the specific problem you're trying to solve.

Okay, so let's finally talk a little bit about implementation. How do we implement this? There are several software packages out there that can help you perform time series analysis and forecasting, such as those available in R, Python, and MATLAB. If we just focus in on Python for a moment, two of the most popular libraries for time series analysis in Python are, firstly, Pandas, and secondly, a library called Matplotlib. With Pandas, you can easily import, manipulate, and analyze time series data; it can handle things like missing values, aggregate data, and perform statistical analysis on the data. Matplotlib is a library that helps you visualize the time series data: you can create line charts, scatter plots, and heatmaps. Using these libraries, you can perform a wide range of time series analysis tasks like data cleaning, exploratory data analysis, and modeling. You can use Pandas to pre-process your time series data and then use Matplotlib to visualize the trends and seasonality in that data -- there's a short end-to-end sketch of that workflow below.

By understanding the components of a time series and then choosing the right forecasting model, you can make more informed decisions and gain a competitive advantage. So whether you're a data analyst, a business owner, or just a curious sleeper, take advantage of the power of time series analysis and get a glimpse into what the future may hold. If you have any questions, please drop us a line below. And if you want to see more videos like this in the future, please like and subscribe. Thanks for watching.
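Here's the rough end-to-end sketch mentioned above, continuing the synthetic sleep series from the earlier snippet. Pandas and Matplotlib are used as described; the ARIMA model comes from statsmodels, and both that library choice and the data are my own assumptions for illustration:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.arima.model import ARIMA

# Re-create the synthetic daily sleep series (illustrative data only).
rng = np.random.default_rng(42)
days = pd.date_range("2024-01-01", periods=120, freq="D")
values = (np.linspace(7.5, 6.5, len(days))                        # trend
          + 0.8 * np.sin(2 * np.pi * np.arange(len(days)) / 7)    # weekly seasonality
          + rng.normal(0, 0.3, len(days)))                        # noise
sleep = pd.Series(values, index=days, name="hours_of_sleep")

# Pre-process with Pandas: make sure every day is present, fill any gaps,
# and compute a weekly average for a smoother view of the data.
sleep = sleep.asfreq("D").interpolate()
weekly = sleep.resample("W").mean()

# Fit an ARIMA model and forecast the next 14 days. The (p, d, q) order here
# is arbitrary for illustration; in practice you'd choose it from the data.
fit = ARIMA(sleep, order=(2, 1, 1)).fit()
forecast = fit.forecast(14)

# Visualize with Matplotlib: the history, the weekly average, and the forecast.
plt.figure(figsize=(10, 4))
plt.plot(sleep, label="daily hours of sleep")
plt.plot(weekly, label="weekly average")
plt.plot(forecast, label="14-day ARIMA forecast")
plt.xlabel("day")
plt.ylabel("hours of sleep")
plt.legend()
plt.show()
```

Swapping in exponential smoothing, or any other forecasting model, is just a matter of changing the model line; the Pandas pre-processing and the Matplotlib plotting stay the same.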
Over the past couple of months, large language models, or LLMs, such as ChatGPT, have taken the world by storm. Whether it's writing poetry or helping plan your upcoming vacation, we are seeing a step change in the performance of AI and its potential to drive enterprise value. My name is Kate Soule. I'm a senior manager of business strategy at IBM Research, and today I'm going to give a brief overview of this new field of AI that's emerging and how it can be used in a business setting to drive value.

Now, large language models are actually part of a different class of models called foundation models. The term "foundation models" was first coined by a team from Stanford when they saw that the field of AI was converging to a new paradigm. Before, AI applications were built by training maybe a library of different AI models, where each model was trained on very task-specific data to perform a very specific task. They predicted that we were going to start moving to a new paradigm, where we would have a foundational capability, or a foundation model, that would drive all of those same use cases and applications. So the exact same applications that we were envisioning before with conventional AI could be driven by one model, and that same model could drive any number of additional applications. The point is that this model can be transferred to any number of tasks.

What gives this model the superpower to transfer to multiple different tasks and perform multiple different functions is that it has been trained, in an unsupervised manner, on a huge amount of unstructured data. What that means, in the language domain, is basically that I'll feed in a bunch of sentences -- and I'm talking terabytes of data here -- to train this model. The start of my sentence might be "no use crying over spilled" and the end of my sentence might be "milk", and I'm trying to get my model to predict the last word of the sentence based off of the words it saw before (I'll show a tiny code sketch of this in a moment). It's this generative capability of the model -- predicting and generating the next word based off of the previous words it has seen -- that makes foundation models part of the field of AI called generative AI: we're generating something new, in this case the next word in a sentence.

And even though these models are trained to perform, at their core, a generation task -- predicting the next word in the sentence -- we can actually take these models and, if you introduce a small amount of labeled data to the equation, tune them to perform traditional NLP tasks -- things like classification or named-entity recognition, things you don't normally associate with a generative model. This process is called tuning: you tune your foundation model by introducing a small amount of data, updating the parameters of the model so it can now perform a very specific natural language task.
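Here's the tiny sketch I mentioned of that "predict the next word" idea. The library and model choice here (the open-source Hugging Face transformers library with a small GPT-2 model) are my own, purely for illustration, not the specific models discussed in this video:

```python
# A minimal sketch of next-word generation with a small pretrained causal
# language model. The library and model (transformers, gpt2) are
# illustrative assumptions, not IBM's foundation models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "No use crying over spilled"
completion = generator(prompt, max_new_tokens=3, num_return_sequences=1)
print(completion[0]["generated_text"])
# A well-trained model will usually continue with something like "... milk",
# because that's the most likely next word given everything it saw in pretraining.
```

Tuning, by contrast, would take a model like this and update its weights on a small labeled dataset so it performs one specific task.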
If you don't have data, or have only very few data points, you can still take these foundation models, and they actually work very well in low-labeled data domains. In a process called prompting, or prompt engineering, you can apply these models to some of those same exact tasks. So an example of prompting a model to perform a classification task might be: you give the model a sentence and then ask it a question -- does this sentence have a positive sentiment or a negative sentiment? The model is going to try to finish generating words in that sentence, and the next natural word would be the answer to your classification problem: it would respond either "positive" or "negative", depending on how it estimated the sentiment of the sentence (there's a small code sketch of this further down). And these models work surprisingly well when applied to these new settings and domains.

Now, this is a lot of where the advantages of foundation models come into play. If we talk about the advantages, the chief advantage is performance. These models have seen so much data -- again, data with a capital D, terabytes of data -- that by the time they're applied to small tasks, they can drastically outperform a model that was only trained on just a few data points. The second advantage of these models is the productivity gains. Just like I said earlier, through prompting or tuning you need far less labeled data to get to a task-specific model than if you had to start from scratch, because your model is taking advantage of all the unlabeled data it saw in its pre-training, when we created this generative task.

With these advantages, there are also some disadvantages that are important to keep in mind. The first of those is the compute cost. The penalty for having these models see so much data is that they're very expensive to train, making it difficult for smaller enterprises to train a foundation model on their own. They're also expensive to run: by the time they get to a huge size, a couple of billion parameters, you might require multiple GPUs at a time just to host these models and run inference, making them a more costly method than traditional approaches.

The second disadvantage of these models is on the trustworthiness side. Just as all that unstructured data is a huge advantage for these models, it also comes at a cost, especially in a domain like language. A lot of these models are trained basically off of language data that's been scraped from the Internet, and there's so much of it that even if you had a whole team of human annotators, you wouldn't be able to go through and actually vet every single data point to make sure it wasn't biased and didn't contain hate speech or other toxic information. And that's just assuming you actually know what the data is. Often we don't even know -- for a lot of these open-source models that have been posted -- what the exact datasets are that these models have been trained on, which leads to trustworthiness issues.

So IBM recognizes the huge potential of these technologies, and my partners in IBM Research are working on multiple different innovations to try to improve the efficiency of these models as well as their trustworthiness and reliability, to make them more relevant in a business setting.
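Here's the small sketch I mentioned of prompting a model to act as a sentiment classifier. As with the earlier snippet, the library and model (Hugging Face transformers with the small instruction-following model google/flan-t5-small) are my own illustrative choices, not the models discussed here:

```python
# A minimal sketch of prompting a generative model to do classification.
# Library and model (transformers, google/flan-t5-small) are illustrative
# assumptions; any instruction-following model would work similarly.
from transformers import pipeline

classifier = pipeline("text2text-generation", model="google/flan-t5-small")

sentence = "The support team resolved my issue quickly and politely."
prompt = (
    "Does the following sentence have a positive or negative sentiment? "
    f"Sentence: {sentence} Answer:"
)

result = classifier(prompt, max_new_tokens=5)
print(result[0]["generated_text"])  # expected to be something like "positive"
```

Notice that nothing about the model was retrained here: the "classifier" is just the generative model finishing a carefully worded prompt.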
All of the examples I've talked through so far have just been on the language side, but the reality is, there are a lot of other domains that foundation models can be applied to. Famously, we've seen foundation models for vision, such as DALL-E 2, which takes text data and uses it to generate a custom image. We've seen models for code, with products like Copilot, that can help complete code as it's being authored.

And IBM is innovating across all of these domains. Whether it's language models that we're building into products like Watson Assistant and Watson Discovery, vision models that we're building into products like Maximo Visual Inspection, or Ansible code models that we're building with our partners at Red Hat under Project Wisdom, we're innovating across all of these domains and more. We're working on chemistry: for example, we just published and released MoLFormer, a foundation model to promote molecule discovery for different targeted therapeutics. And we're working on models for climate change, building Earth science foundation models using geospatial data to improve climate research.

I hope you found this video both informative and helpful. If you're interested in learning more, particularly how IBM is working to improve some of these disadvantages, making foundation models more trustworthy and more efficient, please take a look at the links below. Thank you.