《未来的智能世界中,谁掌握了计算机谁就掌握了世界演进的武器》

sunkingfs 发表于 2023/10/23 04:15:54
【摘要】 现在科技发展得太快,也越来越复杂,各行各业都需要计算机的辅助,甚至是主导。这篇文章可以让读者大致了解计算机科学的技术分类,对计算机领域有一个概览。
《In the future intelligence world, whoever masters computers will master the weapons of world evolution》

“计算机天生就是用来解决以前没有过的问题的。” --- 比尔·盖茨(微软公司创始人)
“Computers are designed to solve problems that have never been solved before.” --- Bill Gates (Founder of Microsoft)

We built computers to expand our brains. Originally, scientists built computers to solve arithmetic, but they turned out to be incredibly useful for many other things as well: running the entire internet, rendering lifelike graphics, powering artificial brains, or simulating the Universe. Amazingly, all of it boils down to just flipping zeros and ones. Computers have become smaller and more powerful at an incredible rate: there is more computing power in your cell phone than there was in the entire world in the mid-60s, and the entire Apollo moon landing could have been run on a couple of Nintendos.
我们制造计算机是为了扩展我们的大脑,最初,科学家们建造计算机是为了解决算术问题,但事实证明它们对于许多其他事情也非常有用:运行整个互联网、逼真的图形、人造大脑或模拟宇宙,但令人惊讶的是,所有这些都归结为只是翻转0和1。 计算机以令人难以置信的速度变得更小、更强大。 你的手机的计算能力比 60 年代中期整个世界的计算能力还要强大,整个阿波罗登月可以在几台任天堂游戏机上运行。
Computer science is the subject that studies what computers can do. It is a diverse and overlapping field, but I’m going to split it into three parts: the fundamental theory of computer science, computer engineering, and applications. We’ll start with the father of theoretical computer science, Alan Turing, who formalised the concept of a Turing machine, a simple description of a general-purpose computer. People came up with other designs for computing machines, but they are all equivalent to a Turing machine, which makes it the foundation of computer science. A Turing machine contains several parts: an infinitely long tape that is split into cells containing symbols; a head that can read and write symbols on the tape; a state register that stores the state of the head; and a list of possible instructions. In today’s computers the tape is like the working memory or RAM, the head is the central processing unit, and the list of instructions is held in the computer’s memory. Even though a Turing machine is a simple set of rules, it’s incredibly powerful, and this is essentially what all computers do nowadays, although our computers obviously have a few more parts, like permanent storage and all the other components.
计算机科学是研究计算机能做什么的学科。这是一个多样化且相互交叉的领域,但我将其分为三个部分:计算机科学的基础理论、计算机工程和应用。我们将从理论计算机科学之父艾伦·图灵开始,他形式化了图灵机的概念,这是对通用计算机的简单描述。人们提出了其他计算机器的设计,但它们都等价于图灵机,这使其成为计算机科学的基础。图灵机包含几个部分:一条无限长的、被分成包含符号的单元的磁带;一个可以在磁带上读取和写入符号的磁头;一个存储磁头状态的状态寄存器;以及一个可能的指令列表。在当今的计算机中,磁带就像工作存储器或 RAM,磁头是中央处理单元,指令列表保存在计算机的内存中。尽管图灵机只是一组简单的规则,但它的功能却非常强大,这基本上就是当今所有计算机所做的事情,尽管我们的计算机显然还有更多部件,例如永久存储和所有其他组件。
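The tape, head, state register and instruction list described above can be sketched in a few lines of code. This is a minimal, illustrative simulator, not how real hardware works; the binary-increment transition table and all names here are made up for the example:

```python
# A minimal sketch of a Turing machine. The example "program" (transition
# table) below increments a binary number written on the tape.
def run_turing_machine(tape, transitions, state="start", head=0, max_steps=1000):
    """Run until the machine enters the 'halt' state; return the tape contents."""
    tape = dict(enumerate(tape))           # the "infinite" tape as a sparse dict
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")       # "_" marks a blank cell
        write, move, state = transitions[(state, symbol)]
        tape[head] = write                 # the head writes a symbol...
        head += 1 if move == "R" else -1   # ...then moves one cell left or right
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# (state, read) -> (write, move, next_state): binary increment. The head starts
# at the left, scans right to the end of the number, then carries ones leftward.
INC = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_turing_machine("1011", INC))  # 1011 (11) + 1 = 1100 (12)
```

Despite being this simple, adding more states and symbols is enough to express any computation a real computer can perform.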
Every problem that is computable by a Turing machine is also computable using lambda calculus, which is the basis of research in programming languages. Computability theory attempts to classify what is and isn’t computable. There are some problems that, by their very nature, can never be solved by a computer; a famous example is the halting problem, where you try to predict whether a program will stop running or carry on forever. There are programs for which this is impossible to answer, by a computer or a human. Many problems are theoretically solvable but in practice take too much memory, or more steps than the lifetime of the Universe, to solve, and computational complexity attempts to categorise these problems according to how they scale. There are many different classes of complexity and many classes of problem that fall into each type. A lot of real-world problems fall into these practically unsolvable categories, but fortunately computer scientists have a bunch of sneaky tricks where you can fudge things and get pretty good answers, though you’ll never know if they are the best answers. An algorithm is a set of instructions, independent of the hardware or programming language, designed to solve a particular problem. It is kind of like a recipe for how to build a program, and a lot of work goes into developing algorithms to get the best out of computers. Different algorithms can get to the same final result, like sorting a random set of numbers into order, but some algorithms are much more efficient than others; this is studied using big-O notation.
图灵机可计算的每个问题都可以使用 Lambda 演算来计算,Lambda 演算是编程语言研究的基础。可计算性理论试图对可计算和不可计算的问题进行分类。有些问题由于其本质而永远无法由计算机解决,一个著名的例子是停机问题,即试图预测一个程序会停止运行还是会永远运行下去。有些程序,无论是计算机还是人类都无法回答这个问题。许多问题在理论上是可以解决的,但实际上需要太多的内存,或者需要比宇宙寿命还多的步骤才能解决,计算复杂性理论试图根据问题的规模增长方式对它们进行分类。复杂度有许多不同的类别,每种类型都包含许多类问题。现实世界中有很多问题都属于这些实际上无法求解的类别,但幸运的是,计算机科学家有很多巧妙的技巧,可以通过近似得到相当好的答案,只是你永远不知道它们是否是最好的答案。算法是一组独立于硬件或编程语言的指令,旨在解决特定问题。它有点像构建程序的配方,人们投入了大量工作来开发算法,以充分发挥计算机的能力。不同的算法可以得到相同的最终结果,例如将一组随机数字按顺序排序,但某些算法比其他算法高效得多,这是用大 O 表示法来研究的。
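The sorting example above can be made concrete. Below is an illustrative comparison of two textbook sorts: both reach the same sorted list, but selection sort makes O(n²) comparisons while merge sort makes O(n log n), and counting the comparisons shows the gap directly:

```python
import random

def selection_sort(xs):
    """O(n^2): scan the unsorted remainder for the minimum, n times over."""
    xs, comparisons = list(xs), 0
    for i in range(len(xs)):
        best = i
        for j in range(i + 1, len(xs)):
            comparisons += 1
            if xs[j] < xs[best]:
                best = j
        xs[i], xs[best] = xs[best], xs[i]
    return xs, comparisons

def merge_sort(xs):
    """O(n log n): sort each half recursively, then merge the sorted halves."""
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, cl = merge_sort(xs[:mid])
    right, cr = merge_sort(xs[mid:])
    merged, comparisons = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]      # one side is exhausted; append the rest
    return merged, comparisons

data = random.sample(range(10000), 1000)
_, c1 = selection_sort(data)
_, c2 = merge_sort(data)
print(c1, c2)   # the O(n^2) sort needs far more comparisons at n = 1000
```

At n = 1000 the selection sort always performs exactly n(n−1)/2 = 499,500 comparisons, while merge sort stays near n·log₂n ≈ 10,000; the gap widens rapidly as n grows.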
Information theory studies the properties of information and how it can be measured, stored and communicated. One application of this is how well you can compress data, making it take up less memory while preserving all or most of the information, but there are lots of other applications. Related to information theory is coding theory. And cryptography is obviously very important for keeping information sent over the internet secret. There are many different encryption schemes which scramble data and usually rely on some very complex mathematical problem to keep the information locked up. These are the main branches of theoretical computer science, although there are many more I don’t have time to go into: logic, graph theory, computational geometry, automata theory, quantum computation, parallel programming, formal methods and data structures. But let’s move on to computer engineering.
信息论研究信息的属性以及如何测量、存储和传播信息。其应用之一是数据压缩,即在保留全部或大部分信息的同时让数据占用更少的内存,但还有许多其他应用。与信息论相关的是编码理论。密码学对于保守通过互联网发送的信息的秘密显然非常重要。有许多不同的加密方案会打乱数据,并且通常依赖于一些非常复杂的数学问题来锁定信息。这些是理论计算机科学的主要分支,尽管还有很多我没有时间深入讨论的内容:逻辑、图论、计算几何、自动机理论、量子计算、并行编程、形式化方法和数据结构。但让我们继续讨论计算机工程。
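A small sketch of the compression idea above, using Python's standard `zlib` module: Shannon entropy measures how many bits of information each byte really carries, and predictable (low-entropy) data compresses far better than random-looking (high-entropy) data. The two sample inputs are made up for the example:

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Average bits of information per byte, from the symbol frequencies."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

repetitive = b"abab" * 1000   # very predictable: only 1 bit/byte of information
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(4000))  # high entropy

for name, data in [("repetitive", repetitive), ("noisy", noisy)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: entropy = {shannon_entropy(data):.2f} bits/byte, "
          f"compressed to {ratio:.0%} of original size")
```

The repetitive input shrinks to a tiny fraction of its size, while the random input barely compresses at all: lossless compression cannot beat the information content of the data.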
Designing computers is difficult because they have to do so many different things. Designers need to try to make sure they are capable of solving many different kinds of problems as optimally as possible. Every single task that is run on the computer goes through the core of the computer: the CPU. When you are doing lots of different things at the same time, the CPU needs to switch back and forth between these jobs to make sure everything gets done in a reasonable time. This is controlled by a scheduler, which chooses what to do when and tries to get through the tasks in the most efficient way, which can be a very difficult problem. Multiprocessing helps speed things up because the CPU has several cores that can execute multiple jobs in parallel, but this makes the job of the scheduler even more complex. Computer architecture is how a processor is designed to perform tasks, and different architectures are good at different things: CPUs are general-purpose, GPUs are optimised for graphics, and FPGAs can be programmed to be very fast at a very narrow range of tasks.
设计计算机很困难,因为它们必须做很多不同的事情。 设计师需要尝试并确保他们能够尽可能最佳地解决许多不同类型的问题。 计算机上运行的每个任务都经过计算机的核心:CPU。 当你同时做很多不同的事情时,CPU 需要在这些工作之间来回切换,以确保所有事情都能在合理的时间内完成。这是由调度程序控制的,调度程序选择何时做什么并尝试以最有效的方式完成任务,这可能是一个非常困难的问题。 多处理有助于加快速度,因为 CPU 有多个内核可以并行执行多个作业。但这使得调度程序的工作变得更加复杂。 计算机架构是处理器执行任务的设计方式,不同的架构擅长不同的事情。 CPU 是通用的,GPU 针对图形进行了优化,而 FPGA 可以编程为在非常窄的任务范围内非常快。
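Real OS schedulers are far more sophisticated, but the time-slicing idea above can be sketched with one classic textbook policy, round-robin: each job runs for at most one slice, then goes to the back of the queue, so every job makes steady progress. The job names and times are made up for the example:

```python
from collections import deque

def round_robin(jobs, time_slice=2):
    """jobs: {name: remaining_time}. Returns the sequence of (job, ran) slices."""
    queue = deque(jobs.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(time_slice, remaining)      # run for at most one time slice
        timeline.append((name, ran))
        if remaining > ran:                   # not finished yet: requeue at the back
            queue.append((name, remaining - ran))
    return timeline

print(round_robin({"browser": 3, "compiler": 5, "music": 1}))
```

Notice that the short "music" job finishes early instead of waiting behind the long "compiler" job; that responsiveness is exactly why interactive systems interleave work rather than running jobs to completion one at a time.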
On top of the raw hardware there are many layers of software, written by programmers using many different programming languages. A programming language is how humans tell a computer what to do, and they vary greatly depending on the job at hand, from low-level languages like assembly through to high-level languages like Python or JavaScript for coding websites and apps. In general, the closer a language is to the hardware, the more difficult it is for humans to use. At all stages of this hierarchy, the code that programmers write needs to be turned into raw CPU instructions, and this is done by one or several programs called compilers. Designing programming languages and compilers is a big deal, because they are the tools that software engineers use to make everything, so they need to be as easy to use as possible but also versatile enough to allow programmers to build their crazy ideas. The operating system is the most important piece of software on the computer, as it is what we interact with, and it controls how all the other programs run on the hardware; engineering a good operating system is a huge challenge.
在原始硬件之上有许多层软件,由程序员使用多种不同的编程语言编写。 编程语言是人类告诉计算机要做什么的方式,根据手头的工作,它们有很大的不同,从低级语言(如汇编)到高级语言(如用于编码网站和应用程序的 python 或 javascript)。 一般来说,一种语言越接近硬件,人类使用起来就越困难。在这一层次结构的所有阶段,程序员编写的代码都需要转换为原始 CPU 指令,这是由一个或多个称为编译器的程序完成的。 设计编程语言和编译器是一件大事,因为它们是软件工程师用来制造一切的工具,因此它们需要尽可能易于使用,但也要足够通用,以允许程序员构建他们的疯狂想法。 操作系统是计算机上最重要的软件,因为它是我们与之交互的部分,它控制着所有其他程序在硬件上的运行方式,设计一个好的操作系统是一个巨大的挑战。
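One way to glimpse the layering described above is Python's standard `dis` module, which shows the lower-level bytecode that the interpreter's compiler produces from a single high-level line (bytecode for a virtual machine, one layer above real CPU instructions; the exact instruction names vary between Python versions):

```python
import dis

def area(r):
    # One high-level line: compute the area of a circle of radius r.
    return 3.14159 * r * r

# Show the stack-machine instructions this one line compiles down to:
# loading constants and variables, multiplying, and returning a value.
dis.dis(area)
```

Each printed instruction does one tiny step (load a constant, load a variable, multiply, return), which is much closer to how processors actually work than the source line a human writes.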
This brings us to software engineering: writing bundles of instructions telling the computer what to do. Building good software is an art form, because you have to translate your creative ideas into logical instructions in a specific language, make them run as efficiently as possible, and keep them as free of errors as you can, so there are many best practices and design philosophies that people follow. Some other important areas are getting many computers to communicate and work together to solve problems, storing and retrieving large amounts of data, determining how well computer systems perform at specific tasks, and creating highly detailed and realistic graphics.
这给我们带来了软件工程:编写一组指令告诉计算机要做什么。 构建优秀的软件是一种艺术形式,因为您必须将您的创意转化为特定语言的逻辑指令,使其尽可能高效地运行并且尽可能没有错误。 因此,人们遵循许多最佳实践和设计理念。 其他一些重要领域是让许多计算机进行通信并一起工作来解决问题。 存储和检索大量数据。 确定计算机系统在特定任务上的执行情况,并创建高度详细和逼真的图形。
Now we get to a really cool part of computer science: getting computers to solve real-world problems. These technologies underlie a lot of the programs, apps and websites we use. When you are going on vacation and you want to get the best trip for the money, you are solving an optimisation problem. Optimisation problems appear everywhere, and finding the best path or most efficient combination of parts can save businesses millions of dollars. This is related to Boolean satisfiability, where you attempt to work out whether a logic formula can be satisfied or not. This was the first problem proved to be NP-complete, and so it was widely considered intractable, but the amazing development of new SAT solvers means that huge SAT problems are solved routinely today, especially in artificial intelligence. Computers extend our brains and multiply our cognitive abilities.
现在我们来到计算机科学中一个非常酷的部分:让计算机解决现实世界的问题。这些技术是我们使用的许多程序、应用和网站的基础。当你去度假并且想要用这笔钱获得最好的旅行时,你正在解决一个优化问题。优化问题随处可见,找到最佳路径或最有效的部件组合可以为企业节省数百万美元。这与布尔可满足性问题相关,即判断一个逻辑公式能否被满足。这是第一个被证明是 NP 完全的问题,因此曾被广泛认为是难以求解的,但新的 SAT 求解器的惊人发展意味着,如今巨大的 SAT 问题已被常规地解决,尤其是在人工智能领域。计算机扩展了我们的大脑,成倍增强了我们的认知能力。
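The Boolean satisfiability question above can be stated precisely in code. This is a deliberately naive brute-force sketch that tries every assignment; real SAT solvers are vastly cleverer (and that cleverness is why huge instances are now routine), but the question they answer is exactly this one:

```python
from itertools import product

def solve_sat(num_vars, clauses):
    """Brute-force CNF satisfiability. Each clause is a list of ints:
    positive = variable, negative = negated variable, so [1, -2] means
    (x1 OR NOT x2). Returns a satisfying assignment, or None."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        # The formula is satisfied if every clause has at least one true literal.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment        # satisfiable: here is a witness
    return None                      # no assignment works: unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(solve_sat(3, [[1, 2], [-1, 3], [-2, -3]]))
```

With n variables this loop checks up to 2ⁿ assignments, which is exactly the exponential blow-up that makes NP-complete problems look hopeless in the worst case.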
The forefront of computer science research is developing computer systems that can think for themselves: artificial intelligence. There are many avenues that AI research takes, the most prominent of which is machine learning, which aims to develop algorithms and techniques that enable computers to learn from large amounts of data and then use what they’ve learned to do something useful, like make decisions or classify things. And there are many different types of machine learning. Closely related are fields like computer vision, which tries to make computers able to see objects in images like we do, using image-processing techniques. Natural language processing aims to get computers to understand and communicate using human language, or to process large amounts of data in the form of words for analysis. This commonly uses another field called knowledge representation, where data is organised according to its relationships, so that, for example, words with similar meanings are clustered together. Machine learning algorithms have improved because of the large amounts of data we give them. Big data looks at how to manage and analyse large amounts of data to get value from it, and we will get even more data from the Internet of Things, which adds data collection and communication to everyday objects. Hacking is not a traditional academic discipline but is definitely worth mentioning: trying to find weaknesses in computer systems and take advantage of them without being noticed.
计算机科学研究的前沿是开发能够自我思考的计算机系统:人工智能。人工智能研究有多种途径,其中最突出的是机器学习,它旨在开发算法和技术,使计算机能够从大量数据中学习,然后利用学到的知识来做一些有用的事情,例如做出决策或对事物进行分类。机器学习有许多不同的类型。与之密切相关的是计算机视觉等领域,它试图让计算机能够像我们一样看到图像中的物体,其中使用了图像处理技术。自然语言处理旨在让计算机使用人类语言来理解和交流,或者以文字的形式处理大量数据以供分析。这通常用到另一个称为知识表示的领域,其中数据根据相互关系进行组织,例如具有相似含义的单词会聚集在一起。由于我们提供了大量数据,机器学习算法得到了改进。大数据研究如何管理和分析大量数据以从中获取价值,而物联网为日常物品增加了数据收集和通信功能,将带来更多的数据。黑客攻击不是一门传统的学科,但绝对值得一提:它试图找到计算机系统的弱点,并在不被注意的情况下加以利用。
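The learn-from-data-then-classify loop above can be shown with one of the simplest possible learners, a nearest-centroid classifier. The tiny fruit "dataset" and the (weight, diameter) features are invented for this illustration; real machine-learning systems use far richer models and far more data:

```python
def train(examples):
    """examples: list of ((x, y), label). Learn one centroid (mean point)
    per label -- this is the entire 'model'."""
    sums = {}
    for (x, y), label in examples:
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    return {label: (sx / n, sy / n) for label, (sx, sy, n) in sums.items()}

def predict(centroids, point):
    """Classify a new point by whichever learned centroid is closest."""
    x, y = point
    return min(centroids,
               key=lambda lbl: (x - centroids[lbl][0]) ** 2
                             + (y - centroids[lbl][1]) ** 2)

# Made-up training data: (weight in g, diameter in cm) -> fruit label.
data = [((30, 2), "grape"), ((25, 2), "grape"),
        ((180, 8), "apple"), ((200, 9), "apple")]
model = train(data)
print(predict(model, (40, 3)))    # a small, light fruit -> grape
print(predict(model, (170, 7)))   # a large, heavy fruit -> apple
```

The key point is that nothing fruit-specific was programmed: the decision rule was derived entirely from the labelled examples, which is the essence of machine learning.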
Computational science uses computers to help answer scientific questions, from fundamental physics to neuroscience, and often makes use of supercomputing, which throws the weight of the world’s most powerful computers at very large problems, often in the area of simulation. Then there is human-computer interaction, which looks at how to design computer systems that are easy and intuitive to use. Virtual reality, augmented reality and telepresence enhance or replace our experience of reality. And finally there is robotics, which gives computers a physical embodiment, from a Roomba to attempts to make intelligent, human-like machines.
计算科学使用计算机来帮助回答从基础物理学到神经科学的科学问题,并且经常利用超级计算,将世界上最强大的计算机用于解决非常大的问题,通常是在模拟领域。 然后是人机交互,它着眼于如何设计易于使用且直观的计算机系统。 虚拟现实、增强现实和远程呈现增强或取代我们的现实体验。 最后是机器人技术,它为计算机提供了物理体现,从 Roomba 到试图制造像人类一样的智能机器。
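A tiny flavour of the simulation idea above: stepping a physical system forward through time in small increments. This toy example drops a ball (no air drag, simple Euler-style time stepping) and checks the result against the known closed-form answer; large-scale physics simulations on supercomputers are built on the same basic loop, just with vastly more state per step:

```python
def simulate_fall(height, dt=0.001, g=9.81):
    """Return the simulated time (s) for an object to fall from `height`
    metres, ignoring air resistance, by stepping the physics forward."""
    y, v, t = height, 0.0, 0.0
    while y > 0:
        v += g * dt      # gravity accelerates the object over one time step
        y -= v * dt      # the object moves down at its current speed
        t += dt
    return t

t = simulate_fall(20.0)
analytic = (2 * 20.0 / 9.81) ** 0.5   # closed form: t = sqrt(2h/g)
print(f"simulated: {t:.3f} s, analytic: {analytic:.3f} s")
```

Shrinking `dt` makes the simulation converge on the exact answer at the cost of more steps, which is the fundamental accuracy/compute trade-off of numerical simulation.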
So that is my map of computer science, a field that is still developing as fast as it ever has, despite the fact that the underlying hardware is hitting some hard limits as we struggle to miniaturise transistors any further, so lots of people are working on other kinds of computers to try to overcome this problem. Computers have had an absolutely huge impact on human society, so it is going to be interesting to see where this technology goes in the next one hundred years. Who knows, perhaps one day we'll all be computers.
这就是我的计算机科学地图,这个领域仍在以前所未有的速度发展,尽管事实上,随着我们努力使晶体管小型化,底层硬件正在达到一些硬性限制,所以很多人正在研究其他领域各种计算机试图克服这个问题。 计算机对人类社会产生了绝对巨大的影响,因此看看这项技术在未来一百年的发展将会很有趣。 也许有一天,我们都会成为计算机。

英语学习时间:(English Study Time)
Computer Science Terms:
Theoretical Computer Science 理论计算机科学
Turing machine 图灵机,Lambda calculus Lambda 演算,Computability Theory 可计算性理论,algorithm 算法,Information theory 信息论,coding theory 编码理论,Cryptography 密码学,Logic 逻辑,Graph Theory 图论,Computational Geometry 计算几何,Automata Theory 自动机理论,Quantum Computation 量子计算,Parallel Programming 并行编程,Formal Methods 形式化方法,Data Structures 数据结构
Computer Engineering 计算机工程
scheduler 调度器,programming language 编程语言,compiler 编译器,operating system 操作系统,software engineering 软件工程
Application 应用
optimisation problem 最优化问题,Boolean Satisfiability Problem 布尔可满足性问题/SAT 问题,Artificial Intelligence 人工智能,machine learning 机器学习,computer vision 计算机视觉,Natural language processing 自然语言处理,Big data 大数据,Hacking 黑客攻击,Computational Science 计算科学,Super Computing 超级计算,Human Computer Interaction 人机交互,Virtual reality 虚拟现实,Augmented Reality 增强现实,Telepresence 远程呈现,Robotics 机器人学

PS:
1)《Map of Computer Science》, YouTube link: https://www.youtube.com/watch?v=SzJ46YA_RaA&t=541s
2)《Eight predictions about 2030》, World Economic Forum: https://cn.weforum.org/agenda/2016/11/2030/
【版权声明】本文为华为云社区用户翻译文章,如果您发现本社区中有涉嫌抄袭的内容,欢迎发送邮件进行举报,并提供相关证据,一经查实,本社区将立刻删除涉嫌侵权内容, 举报邮箱:cloudbbs@huaweicloud.com