
Learning and Leading in the Era of Artificial Intelligence and Machine Learning, Part 1

(Image: Wikimedia Commons)

With this two-part blog series, I’ll explore the evolving nature of how humans learn from one another, and the opportunity we now have to strengthen our leadership qualities in the era of artificial intelligence and machine learning. In part 1, I’ll discuss the golden age of knowledge and learning, which can make a significant impact on your business. In part 2, I’ll address the importance of negotiating conflicts, priorities and gaps, and how we can all rethink leadership in our lives with networks of human connections and machine-generated insight.

Entering the golden age of learning

As adoption of artificial intelligence (AI) and machine learning (ML) becomes more pervasive, the way we live and work is being fundamentally altered. Embedding the “finely tuned” predictive outcomes of machine learning into business processes can be a game-changer, especially where modern and core applications are integrated to optimize and, in some cases, automate decision making. Furthermore, deep learning has shown that the right algorithms can surpass humans in scale and speed in areas such as natural language processing and image recognition. These technologies are still evolving, but businesses can already foresee how they will need to reconfigure, or even reimagine, current ways of performing day-to-day tasks in order to transform customer relationships, reduce costs and risks, and create room for future inventions and new business models. So what can we humans, in the midst of this transformation, “learn” from this era of machine learning?

Let’s look first at how enterprise adoption of AI is advancing from relatively simple ML to more complex networked systems. Once a business tackles data aggregation and preparation, it can turn to machine learning, which has traditionally focused on selecting the right algorithm and then tweaking its few available parameters. Deep learning, however, increases the complexity of model building: you first design neural networks, then tune the dozens of interrelated parameters within those network structures. It’s worth pointing out that this is still narrow AI, where we deliberately design the structures to be rigid and fixed.
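To make that contrast concrete, here is a minimal sketch using open source scikit-learn and Keras rather than anything platform specific; the synthetic dataset and every parameter value are illustrative assumptions only:

```python
# Classical ML: pick an algorithm, then tweak its few exposed parameters.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(C=0.5, max_iter=1000)  # C is one of only a handful of knobs
clf.fit(X_train, y_train)
print("logistic regression accuracy:", clf.score(X_test, y_test))

# Deep learning: the network structure itself (layer count, widths,
# activations, dropout, optimizer, learning rate, batch size, ...) becomes
# part of the design, with dozens of interrelated choices.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)
print("neural network accuracy:", model.evaluate(X_test, y_test, verbose=0)[1])
```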

But in order to put AI to work for business, we want basic algorithms, neural networks, data and support systems to work together as a system, and that requires more flexibility. To put it differently, the current challenges in AI are not about training a single model. Instead, we need to architect and build data-driven services that are simple and easy for users, so that they can put their ideas into practice and manage algorithms, data and applications while the underlying complexity stays masked.

This platform approach is the foundation for Watson Studio, which supports a system of constantly evolving business assets, projects and communities. Once we start to apply AI technologies in a pervasive way, something interesting can happen: the speed at which knowledge grows can begin to outpace the rate of the data explosion, even as that curve gets steeper, as shown in Figure 1: Watson’s Law.

Figure 1: Watson’s Law

2018 can be an inflection point. As we enter a golden age of learning, organizations that emphasize training both machines and people can stay ahead of this inflection point. Doing so will mean grasping context, reasoning abstractly, reacting to experience, prioritizing, planning, and even generating original inventions or artistic breakthroughs that lead toward general AI. So what skills do you want to develop? Consider that in Watson Studio you can:

  • Code in R, Python, Scala and SQL and get first-class support in tools such as Jupyter notebooks and RStudio. Or bring your own libraries, set up reusable environments or start with default templates based on Anaconda. Use SparkML, TensorFlow, Keras, scikit-learn, XGBoost, PyTorch, ONNX and more.
  • Join a vibrant ecosystem and community of data scientists, cognitive researchers, business analysts and developers, all in one environment. Drive innovation, collaboration and productivity by sharing assets in the Watson Knowledge Catalog.
  • Achieve faster time to value, with up to 480% ROI, with SPSS Modeler. Or easily train and customize pre-trained Watson APIs (transfer learning), for example for visual recognition or natural language processing.
  • Design your neural networks and monitor batch training experiments for deep learning, without worrying about log transfers and scripts to visualize results.
  • Access open source and proprietary algorithms and tap IBM Research innovation for auto-modeling and hyperparameter optimization (a generic sketch of the idea follows this list).
  • Tap your hybrid cloud as a foundation for putting AI to work for your business. Serve your algorithms, data and applications in an environment of your choice throughout the data science lifecycle.
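To ground that point about hyperparameter optimization, here is a generic sketch using open source scikit-learn on a built-in dataset; the model choice and grid values are arbitrary assumptions, and auto-modeling services automate this kind of search at much larger scale:

```python
# A toy hyperparameter search: try a small grid of settings with
# cross-validation and keep the best combination.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

param_grid = {
    "n_estimators": [100, 300],   # arbitrary illustrative values
    "max_depth": [4, 8, None],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, n_jobs=-1)
search.fit(X, y)

print("best parameters:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```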

Balancing autonomous and collaborative learning… and sharing regardless

Let’s say you’re a data scientist assigned to use deep learning for image, text and audio processing, for instance to accelerate insurance claim processing after accidents. You already know how to code in several languages. You have experience taking work from experimentation and training through inference and all the way to production. You can use open source tools, take advantage of visual recognition, design neural networks visually and even customize compute in Watson Studio, as shown in Figure 2.
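As one hedged illustration of the image piece of that workflow, here is a minimal transfer learning sketch in generic Keras, not the Watson visual recognition tooling itself; the claims_photos/ directory and the two-class damage versus no-damage setup are hypothetical:

```python
import tensorflow as tf

# Hypothetical folder of claim photos, organized into two class subfolders
# (e.g. damage/ and no_damage/).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "claims_photos/train", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained features; train only the new head

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # damage vs. no_damage
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=3)
```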

Chances are that you’re going to see many interesting results. You’ll want to proactively use your knowledge and skills to form abstractions and observations. You and your colleagues are excited about figuring out how to keep clients happy, monitor insurance adjusters and complete transactions at lower cost and risk. You can even find ways to detect fraud in the same environment. Throughout the project, your team will manage multiple priorities and KPIs to optimize outcomes once you’ve identified the key predictors; one way to surface those predictors is sketched below. This requires individual analysis, interactive discussions, and revisiting initial hypotheses.
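As a hedged illustration, here is a minimal sketch of surfacing key predictors with scikit-learn’s permutation importance; the claims.csv file, its column names and the binary fraud label are assumptions for this example only:

```python
# A hypothetical claims table: the file name, feature columns (claim_amount,
# vehicle_age, days_to_report, prior_claims, ...) and the binary "fraud"
# label are all illustrative assumptions, not a real schema.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("claims.csv")
X, y = df.drop(columns=["fraud"]), df["fraud"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a baseline model, then rank features by how much shuffling each one
# hurts held-out accuracy.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")  # the key predictors to discuss with the team
```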