知识共享许可协议
本作品采用知识共享署名-非商业性使用 3.0 未本地化版本许可协议进行许可。

Node.js v0.10.18 手册 & 文档


目录

关于本文档#

The goal of this documentation is to comprehensively explain the Node.js API, both from a reference as well as a conceptual point of view. Each section describes a built-in module or high-level concept.

本文档的目标是从参考和概念的角度全面解释 Node.js 的 API,每章节描述一个内置模块或高级概念。

Where appropriate, property types, method arguments, and the arguments provided to event handlers are detailed in a list underneath the topic heading.

在适当的情况下,属性类型、方法参数以及传给事件处理器(handler)的参数,会在主题标题下方以列表形式详细说明。

Every .html document has a corresponding .json document presenting the same information in a structured manner. This feature is experimental, and added for the benefit of IDEs and other utilities that wish to do programmatic things with the documentation.

每一个 .html 文件都对应一份内容相同的结构化 .json 文档。这个特性现在还是实验性质的,希望能够为一些需要对文档进行操作的IDE或者其他工具提供帮助。

Every .html and .json file is generated based on the corresponding .markdown file in the doc/api/ folder in node's source tree. The documentation is generated using the tools/doc/generate.js program. The HTML template is located at doc/template.html.

每个 .html 和 .json 文件都是基于源码 doc/api/ 目录下对应的 .markdown 文件生成的。本文档使用 tools/doc/generate.js 程序生成。HTML 模板文件为 doc/template.html。

稳定度#

Throughout the documentation, you will see indications of a section's stability. The Node.js API is still somewhat changing, and as it matures, certain parts are more reliable than others. Some are so proven, and so relied upon, that they are unlikely to ever change at all. Others are brand new and experimental, or known to be hazardous and in the process of being redesigned.

在文档中,您可以了解每一个小节的稳定性。Node.js 的 API 仍在变化之中,随着它日渐成熟,某些部分会比其他部分更可靠。有些部分久经考验、被大量依赖,几乎不可能再改变;另一些则是全新的、实验性的,或者已知有风险而正在重新设计之中。

The stability indices are as follows:

稳定度定义如下

稳定度: 0 - 已废弃
该特性已知存在问题,并且已计划修改。不要依赖它,使用该特性可能会产生警告,也不要指望它保持向后兼容性。

稳定度: 1 - 实验性
该特性最近才被引入,在将来的版本中可能会被修改或移除。请试用它并提供反馈。

稳定度: 2 - 不稳定
该 API 仍在调整中,尚未在实际使用中得到充分验证。如果合理,会保证向后兼容性。

稳定度: 3 - 稳定
该 API 已被验证令人满意,但清理底层代码可能会引起微小的变化。保证向后兼容性。

稳定度: 4 - API 冻结
该 API 已在产品中经过广泛测试,不太可能被修改。

稳定度: 5 - 已锁定
除非发现严重缺陷,该代码不会被更改。请不要对此区域提出更改,更改提议将被拒绝。

JSON 输出#

稳定度: 1 - 实验性

Every HTML file in the markdown has a corresponding JSON file with the same data.

每个通过 markdown 生成的 HTML 文件都对应于一个具有相同数据的 JSON 文件。

This feature is new as of node v0.6.12. It is experimental.

该特性引入于 node v0.6.12,目前还是实验性的。

概述#

An example of a web server written with Node which responds with 'Hello World':

一个输出 “Hello World” 的简单 Web 服务器例子:

var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8124, '127.0.0.1');

console.log('服务器已运行,请打开 http://127.0.0.1:8124/');

To run the server, put the code into a file called example.js and execute it with the node program

要运行这个服务器,先将程序保存为文件 “example.js”,并使用 node 命令来执行:

> node example.js
服务器已运行,请打开 http://127.0.0.1:8124/

All of the examples in the documentation can be run similarly.

所有的文档中的例子均使用相同的方式运行。

全局对象#

These objects are available in all modules. Some of these objects aren't actually in the global scope but in the module scope - this will be noted.

这些对象在所有模块中都是可用的。有些对象实际上并非在全局作用域内而是在模块作用域内——这种情况在以下文档中会特别指出。

global#

  • {Object} The global namespace object.

  • {Object} 全局命名空间对象。

In browsers, the top-level scope is the global scope. That means that in browsers if you're in the global scope var something will define a global variable. In Node this is different. The top-level scope is not the global scope; var something inside a Node module will be local to that module.

在浏览器中,顶级作用域就是全局作用域。这就是说,在浏览器中,如果当前是在全局作用域内,var something将会声明一个全局变量。在Node中则不同。顶级作用域并非全局作用域,在Node模块里的var something只属于那个模块。
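下面是一个简单的示意(变量名 something、answer 为演示而设):模块里的 var 声明不会出现在 global 对象上,只有显式赋值到 global 的属性才是全局的。

```javascript
// 模块作用域演示
var something = 'local';
console.log(global.something);  // undefined:var 声明只属于本模块

global.answer = 42;             // 显式挂到 global 上才是全局变量
console.log(answer);            // 42:可以不加前缀直接访问
```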

process#

  • {Object}

The process object. See the process object section.

进程对象。见 进程对象章节。

console#

  • {Object}

Used to print to stdout and stderr. See the console section.

用于打印标准输出和标准错误。见控制台章节。

类: Buffer#

  • {Function}

Used to handle binary data. See the buffer section

用于处理二进制数据。见Buffer章节。

require()#

  • {Function}

To require modules. See the Modules section. require isn't actually a global but rather local to each module.

引入模块。见Modules章节。require实际上并非全局的而是各个模块本地的。

require.resolve()#

Use the internal require() machinery to look up the location of a module, but rather than loading the module, just return the resolved filename.

使用内部的require()机制查找模块的位置,但不加载模块,只返回解析过的模块文件路径。

require.cache#

  • Object

Modules are cached in this object when they are required. By deleting a key value from this object, the next require will reload the module.

模块在引入时会缓存到该对象。通过删除该对象的键值,下次调用require时会重新加载相应模块。

require.extensions#

稳定度: 0 - 已废弃
  • {Object}

Instruct require on how to handle certain file extensions.

指导require方法如何处理特定的文件扩展名。

Process files with the extension .sjs as .js:

将 .sjs 文件作为 .js 文件处理:

require.extensions['.sjs'] = require.extensions['.js'];

Deprecated In the past, this list has been used to load non-JavaScript modules into Node by compiling them on-demand. However, in practice, there are much better ways to do this, such as loading modules via some other Node program, or compiling them to JavaScript ahead of time.

已废弃 以前,该列表曾用于按需编译非 JavaScript 模块并将其加载进 Node。然而,实践中有更好的方式实现该功能,例如通过其他 Node 程序加载模块,或者提前将它们编译成 JavaScript 代码。

Since the Module system is locked, this feature will probably never go away. However, it may have subtle bugs and complexities that are best left untouched.

由于模块系统的API已锁定,该功能可能永远不会去掉。改动它可能会产生细微的错误和复杂性,所以最好保持不变。

__filename#

  • {String}

The filename of the code being executed. This is the resolved absolute path of this code file. For a main program this is not necessarily the same filename used in the command line. The value inside a module is the path to that module file.

当前所执行代码文件的文件路径。这是该代码文件经过解析后的绝对路径。对于主程序来说,这和命令行中使用的文件路径未必是相同的。在模块中此变量值是该模块文件的路径。

Example: running node example.js from /Users/mjr

例子:在/Users/mjr下运行node example.js

console.log(__filename);
// /Users/mjr/example.js

__filename isn't actually a global but rather local to each module.

__filename实际上并非全局的而是各个模块本地的。

__dirname#

  • {String}

The name of the directory that the currently executing script resides in.

当前执行脚本所在目录的目录名。

Example: running node example.js from /Users/mjr

例子:在/Users/mjr下运行node example.js

console.log(__dirname);
// /Users/mjr

__dirname isn't actually a global but rather local to each module.

__dirname实际上并非全局的而是各个模块本地的。

module#

  • {Object}

A reference to the current module. In particular module.exports is the same as the exports object. module isn't actually a global but rather local to each module.

当前模块的引用。特别地,module.exports 和 exports 指向同一个对象。module 实际上并非全局的,而是各个模块本地的。

See the module system documentation for more information.

详情可见模块系统文档

exports#

A reference to the module.exports object which is shared between all instances of the current module and made accessible through require(). See module system documentation for details on when to use exports and when to use module.exports. exports isn't actually a global but rather local to each module.

module.exports对象的引用,该对象被当前模块的所有实例所共享,通过require()可访问该对象。 何时使用exports以及何时使用module.exports的详情可参见模块系统文档exports实际上并非全局的而是各个模块本地的。

See the module system documentation for more information.

详情可见模块系统文档

See the module section for more information.

关于模块系统的更多信息可参见模块

setTimeout(cb, ms)#

Run callback cb after at least ms milliseconds. The actual delay depends on external factors like OS timer granularity and system load.

至少ms毫秒后调用回调cb。实际延迟取决于外部因素,如操作系统定时器粒度及系统负载。

The timeout must be in the range of 1-2,147,483,647 inclusive. If the value is outside that range, it's changed to 1 millisecond. Broadly speaking, a timer cannot span more than 24.8 days.

超时值必须在1-2147483647的范围内(包含1和2147483647)。如果该值超出范围,则该值被当作1毫秒处理。一般来说,一个定时器不能超过24.8天。

Returns an opaque value that represents the timer.

返回一个代表该定时器的句柄值。

clearTimeout(t)#

Stop a timer that was previously created with setTimeout(). The callback will not execute.

停止一个之前通过setTimeout()创建的定时器。回调不会再被执行。

setInterval(cb, ms)#

Run callback cb repeatedly every ms milliseconds. Note that the actual interval may vary, depending on external factors like OS timer granularity and system load. It's never less than ms but it may be longer.

每隔ms毫秒重复调用回调cb。注意,取决于外部因素,如操作系统定时器粒度及系统负载,实际间隔可能会改变。它不会少于ms但可能比ms长。

The interval must be in the range of 1-2,147,483,647 inclusive. If the value is outside that range, it's changed to 1 millisecond. Broadly speaking, a timer cannot span more than 24.8 days.

间隔值必须在1-2147483647的范围内(包含1和2147483647)。如果该值超出范围,则该值被当作1毫秒处理。一般来说,一个定时器不能超过24.8天。

Returns an opaque value that represents the timer.

返回一个代表该定时器的句柄值。

clearInterval(t)#

Stop a timer that was previously created with setInterval(). The callback will not execute.

停止一个之前通过setInterval()创建的定时器。回调不会再被执行。

The timer functions are global variables. See the timers section.

定时器函数是全局变量。见定时器章节。

控制台#

稳定度: 4 - 冻结
  • {Object}

For printing to stdout and stderr. Similar to the console object functions provided by most web browsers, here the output is sent to stdout or stderr.

用于向 stdout 和 stderr 打印字符。类似于大部分 Web 浏览器提供的 console 对象函数,在这里则是输出到 stdout 或 stderr。

The console functions are synchronous when the destination is a terminal or a file (to avoid lost messages in case of premature exit) and asynchronous when it's a pipe (to avoid blocking for long periods of time).

当输出目标是一个终端或者文件时,console 函数是同步的(为了防止过早退出时丢失信息);当输出目标是一个管道时,它们是异步的(防止长时间阻塞)。

That is, in the following example, stdout is non-blocking while stderr is blocking:

也就是说,在下面的例子中,stdout 是非阻塞的,而 stderr 则是阻塞的。

$ node script.js 2> error.log | tee info.log

In daily use, the blocking/non-blocking dichotomy is not something you should worry about unless you log huge amounts of data.

在日常使用中,您不需要太担心阻塞/非阻塞的差别,除非您需要记录大量数据。

console.log([data], [...])#

Prints to stdout with newline. This function can take multiple arguments in a printf()-like way. Example:

向 stdout 打印并新起一行。这个函数可以像 printf() 那样接受多个参数,例如:

console.log('count: %d', count);

If formatting elements are not found in the first string then util.inspect is used on each argument. See util.format() for more information.

如果在第一个字符串中没有找到格式化元素,那么 util.inspect 将被应用到各个参数。详见 util.format()

console.info([data], [...])#

Same as console.log.

同 console.log。

console.error([data], [...])#

Same as console.log but prints to stderr.

同 console.log,但输出到 stderr。

console.warn([data], [...])#

Same as console.error.

同 console.error。

console.dir(obj)#

Uses util.inspect on obj and prints resulting string to stdout. This function bypasses any custom inspect() function on obj.

对 obj 使用 util.inspect 并将结果字符串输出到 stdout。这个函数会忽略 obj 上任何自定义的 inspect() 函数。

console.time(label)#

Mark a time.

标记一个时间点。

console.timeEnd(label)#

Finish timer, record output. Example:

结束计时器,记录输出。例如:

console.time('100-elements');
for (var i = 0; i < 100; i++) {
  ;
}
console.timeEnd('100-elements');

console.trace(label)#

Print a stack trace to stderr of the current position.

打印当前位置的栈跟踪到 stderr。

console.assert(expression, [message])#

Same as assert.ok() where if the expression evaluates as false throw an AssertionError with message.

与 assert.ok() 相同:如果 expression 的执行结果为 false,则抛出一个带有 message 的 AssertionError。

定时器#

稳定度: 5 - 已锁定

All of the timer functions are globals. You do not need to require() this module in order to use them.

所有的定时器函数都是全局变量。使用这些函数时不需要 require() 此模块。

setTimeout(callback, delay, [arg], [...])#

To schedule execution of a one-time callback after delay milliseconds. Returns a timeoutId for possible use with clearTimeout(). Optionally you can also pass arguments to the callback.

调度 delay 毫秒后的一次 callback 执行。返回一个可能被 clearTimeout() 用到的 timeoutId。可选地,您还能给回调传入参数。

It is important to note that your callback will probably not be called in exactly delay milliseconds - Node.js makes no guarantees about the exact timing of when the callback will fire, nor of the ordering things will fire in. The callback will be called as close as possible to the time specified.

请务必注意,您的回调有可能不会在准确的 delay 毫秒后被调用。Node.js 不保证回调被触发的精确时间和顺序。回调会在尽可能接近所指定时间上被调用。
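以下是一个给回调传入额外参数的示意(回调内容与 50 毫秒的延迟均为演示值):

```javascript
// setTimeout 的额外参数会依次传给回调
var timer = setTimeout(function (who, punct) {
  console.log('hello, ' + who + punct); // 约 50 毫秒后输出 hello, node!
}, 50, 'node', '!');

// 返回的 timeoutId 可传给 clearTimeout() 来取消这次回调
```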

clearTimeout(timeoutId)#

Prevents a timeout from triggering.

阻止一个 timeout 被触发。

setInterval(callback, delay, [arg], [...])#

To schedule the repeated execution of callback every delay milliseconds. Returns a intervalId for possible use with clearInterval(). Optionally you can also pass arguments to the callback.

调度每隔 delay 毫秒执行一次的 callback。返回一个可能被 clearInterval() 用到的 intervalId。可选地,您还能给回调传入参数。

clearInterval(intervalId)#

Stops a interval from triggering.

停止一个 interval 的触发。

unref()#

The opaque value returned by setTimeout and setInterval also has the method timer.unref() which will allow you to create a timer that is active but if it is the only item left in the event loop won't keep the program running. If the timer is already unrefd calling unref again will have no effect.

setTimeoutsetInterval 所返回的值同时具有 timer.unref() 方法,允许您创建一个活动的、但当它是事件循环中仅剩的项目时不会保持程序运行的定时器。如果定时器已被 unref,再次调用 unref 不会产生其它影响。

In the case of setTimeout when you unref you create a separate timer that will wakeup the event loop, creating too many of these may adversely effect event loop performance -- use wisely.

setTimeout 的情景中当您 unref 您会创建另一个定时器,并唤醒事件循环。创建太多这种定时器可能会影响事件循环的性能,慎用。

ref()#

If you had previously unref()d a timer you can call ref() to explicitly request the timer hold the program open. If the timer is already refd calling ref again will have no effect.

如果您之前 unref() 了一个定时器,您可以调用 ref() 来明确要求定时器让程序保持运行。如果定时器已被 ref 那么再次调用 ref 不会产生其它影响。

setImmediate(callback, [arg], [...])#

To schedule the "immediate" execution of callback after I/O events callbacks and before setTimeout and setInterval . Returns an immediateId for possible use with clearImmediate(). Optionally you can also pass arguments to the callback.

调度在所有 I/O 事件回调之后、setTimeoutsetInterval 之前“立即”执行 callback。返回一个可能被 clearImmediate() 用到的 immediateId。可选地,您还能给回调传入参数。

Callbacks for immediates are queued in the order in which they were created. The entire callback queue is processed every event loop iteration. If you queue an immediate from a inside an executing callback that immediate won't fire until the next event loop iteration.

immediate 的回调以它们创建的顺序被加入队列。整个回调队列会在每个事件循环迭代中被处理。如果您在一个正被执行的回调中添加 immediate,那么这个 immediate 在下一个事件循环迭代之前都不会被触发。
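下面的示意验证了这一排队行为(order 数组为演示而设):同一次迭代中已入队的 immediate 按创建顺序执行,而在回调内新建的 immediate 会留到下一次迭代。

```javascript
var order = [];

setImmediate(function () {
  order.push('A');
  // 在回调中创建的 immediate 要等到下一次事件循环迭代
  setImmediate(function () {
    order.push('C');
    console.log(order.join(' -> ')); // A -> B -> C
  });
});

setImmediate(function () {
  order.push('B');
});
```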

clearImmediate(immediateId)#

Stops an immediate from triggering.

停止一个 immediate 的触发。

Modules#

稳定度: 5 - 已锁定

Node has a simple module loading system. In Node, files and modules are in one-to-one correspondence. As an example, foo.js loads the module circle.js in the same directory.

Node有一个简易的模块加载系统。在node中,文件和模块是一一对应的。下面示例是foo.js加载同一目录下的circle.js

The contents of foo.js:

foo.js的内容:

var circle = require('./circle.js');
console.log( 'The area of a circle of radius 4 is '
           + circle.area(4));

The contents of circle.js:

circle.js的内容:

var PI = Math.PI;
exports.area = function (r) {
    return PI * r * r;
};
exports.circumference = function (r) {
    return 2 * PI * r;
};

The module circle.js has exported the functions area() and circumference(). To export an object, add to the special exports object.

circle.js模块输出了area()circumference()两个函数。要输出某个对象,把它加到exports这个特殊对象下即可。

Note that exports is a reference to module.exports making it suitable for augmentation only. If you are exporting a single item such as a constructor you will want to use module.exports directly instead.

注意,exportsmodule.exports的一个引用,只是为了用起来方便。当你想输出的是例如构造函数这样的单个项目,那么需要使用module.exports

// 正确输出构造函数
module.exports = MyConstructor;

Variables local to the module will be private. In this example the variable PI is private to circle.js.

模块内的本地变量是私有的。在这里例子中,PI这个变量就是circle.js私有的。

The module system is implemented in the require("module") module.

模块系统的实现在require("module")中。

循环#

When there are circular require() calls, a module might not be done being executed when it is returned.

当存在循环的 require() 调用时,一个模块在被返回时可能尚未执行完毕。

Consider this situation:

考虑这样一种情形:

a.js:

a.js:

console.log('a starting');
exports.done = false;
var b = require('./b.js');
console.log('in a, b.done = %j', b.done);
exports.done = true;
console.log('a done');

b.js:

b.js:

console.log('b starting');
exports.done = false;
var a = require('./a.js');
console.log('in b, a.done = %j', a.done);
exports.done = true;
console.log('b done');

main.js:

main.js:

console.log('main starting');
var a = require('./a.js');
var b = require('./b.js');
console.log('in main, a.done=%j, b.done=%j', a.done, b.done);

When main.js loads a.js, then a.js in turn loads b.js. At that point, b.js tries to load a.js. In order to prevent an infinite loop an unfinished copy of the a.js exports object is returned to the b.js module. b.js then finishes loading, and its exports object is provided to the a.js module.

当 main.js 加载 a.js 时,a.js 又去加载 b.js。这时,b.js 会尝试加载 a.js。为了防止无限循环,a.js 的 exports 对象会以一个未完成副本(unfinished copy)的形式返回给 b.js 模块。随后 b.js 完成加载,并将它的 exports 对象提供给 a.js 模块。

By the time main.js has loaded both modules, they're both finished. The output of this program would thus be:

这样main.js就把这两个模块都加载完成了。这段程序的输出如下:

$ node main.js
main starting
a starting
b starting
in b, a.done = false
b done
in a, b.done = true
a done
in main, a.done=true, b.done=true

If you have cyclic module dependencies in your program, make sure to plan accordingly.

如果你的程序中存在循环的模块依赖,请确保做好相应的规划。

核心模块#

Node has several modules compiled into the binary. These modules are described in greater detail elsewhere in this documentation.

Node中有一些模块是编译成二进制的。这些模块在本文档的其他地方有更详细的描述。

The core modules are defined in node's source in the lib/ folder.

核心模块定义在node源代码的lib/目录下。

Core modules are always preferentially loaded if their identifier is passed to require(). For instance, require('http') will always return the built in HTTP module, even if there is a file by that name.

如果传给 require() 的标识符是核心模块,则总是优先加载核心模块。例如,require('http') 总是返回内置的 HTTP 模块,即使存在同名的文件。

文件模块#

If the exact filename is not found, then node will attempt to load the required filename with the added extension of .js, .json, and then .node.

如果按文件名没有查找到,那么node会添加 .js.json后缀名,再尝试加载,如果还是没有找到,最后会加上.node的后缀名再次尝试加载。

.js files are interpreted as JavaScript text files, and .json files are parsed as JSON text files. .node files are interpreted as compiled addon modules loaded with dlopen.

.js 文件会被解析为 JavaScript 纯文本文件,.json 文件会被解析为 JSON 格式的纯文本文件,.node 文件则会被解析为编译后的插件模块,由 dlopen 进行加载。

A module prefixed with '/' is an absolute path to the file. For example, require('/home/marco/foo.js') will load the file at /home/marco/foo.js.

模块以'/'为前缀,则表示绝对路径。例如,require('/home/marco/foo.js') ,加载的是/home/marco/foo.js这个文件。

A module prefixed with './' is relative to the file calling require(). That is, circle.js must be in the same directory as foo.js for require('./circle') to find it.

模块以'./'为前缀,则路径是相对于调用require()的文件。 也就是说,circle.js必须和foo.js在同一目录下,require('./circle')才能找到。

Without a leading '/' or './' to indicate a file, the module is either a "core module" or is loaded from a node_modules folder.

当没有以'/'或者'./'来指向一个文件时,这个模块要么是"核心模块",要么就是从node_modules文件夹加载的。

If the given path does not exist, require() will throw an Error with its code property set to 'MODULE_NOT_FOUND'.

如果指定的路径不存在,require()会抛出一个code属性为'MODULE_NOT_FOUND'的错误。

node_modules文件夹中加载#

If the module identifier passed to require() is not a native module, and does not begin with '/', '../', or './', then node starts at the parent directory of the current module, and adds /node_modules, and attempts to load the module from that location.

如果require()中的模块名不是一个本地模块,也没有以'/', '../', 或是 './'开头,那么node会从当前模块的父目录开始,尝试在它的/node_modules文件夹里加载相应模块。

If it is not found there, then it moves to the parent directory, and so on, until the root of the tree is reached.

如果没有找到,那么就再向上移动到父目录,直到到达顶层目录位置。

For example, if the file at '/home/ry/projects/foo.js' called require('bar.js'), then node would look in the following locations, in this order:

例如,如果位于'/home/ry/projects/foo.js'的文件调用了require('bar.js'),那么node查找的位置依次为:

  • /home/ry/projects/node_modules/bar.js
  • /home/ry/node_modules/bar.js
  • /home/node_modules/bar.js
  • /node_modules/bar.js

This allows programs to localize their dependencies, so that they do not clash.

这使得程序可以将它们的依赖本地化,避免相互冲突。

Folders as Modules#

It is convenient to organize programs and libraries into self-contained directories, and then provide a single entry point to that library. There are three ways in which a folder may be passed to require() as an argument.

可以把程序和库放到一个单独的文件夹里,并提供单一入口来指向它。有三种方法,使一个文件夹可以作为require()的参数来加载。

The first is to create a package.json file in the root of the folder, which specifies a main module. An example package.json file might look like this:

首先是在文件夹的根目录创建一个叫做package.json的文件,它需要指定一个main模块。下面是一个package.json文件的示例。

{ "name" : "some-library",
  "main" : "./lib/some-library.js" }

If this was in a folder at ./some-library, then require('./some-library') would attempt to load ./some-library/lib/some-library.js.

示例中这个文件,如果是放在./some-library目录下面,那么require('./some-library')就将会去加载./some-library/lib/some-library.js

This is the extent of Node's awareness of package.json files.

这就是 Node 对 package.json 文件的全部处理方式。

If there is no package.json file present in the directory, then node will attempt to load an index.js or index.node file out of that directory. For example, if there was no package.json file in the above example, then require('./some-library') would attempt to load:

如果目录里没有package.json这个文件,那么node就会尝试去加载这个路径下的index.js或者index.node。例如,若上面例子中,没有package.json,那么require('./some-library')就将尝试加载下面的文件:

  • ./some-library/index.js
  • ./some-library/index.node

Caching#

Modules are cached after the first time they are loaded. This means (among other things) that every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.

模块在第一次加载后会被缓存。这意味着(除了其他作用)每次调用 require('foo') 都会返回完全相同的对象,前提是它们解析到同一个文件。

Multiple calls to require('foo') may not cause the module code to be executed multiple times. This is an important feature. With it, "partially done" objects can be returned, thus allowing transitive dependencies to be loaded even when they would cause cycles.

多次调用 require('foo') 不一定会使模块代码被执行多次。这是一个重要的特性。借助它,可以返回“部分完成”的对象,从而允许加载传递依赖,即使它们会形成循环。

If you want to have a module execute code multiple times, then export a function, and call that function.

如果你希望一个模块多次执行,那么就输出一个函数,然后调用这个函数。

Module Caching Caveats#

Modules are cached based on their resolved filename. Since modules may resolve to a different filename based on the location of the calling module (loading from node_modules folders), it is not a guarantee that require('foo') will always return the exact same object, if it would resolve to different files.

模块的缓存是依赖于解析后的文件名。由于随着调用的位置不同,可能解析到不同的文件(比如需从node_modules文件夹加载的情况),所以,如果解析到其他文件时,就不能保证require('foo')总是会返回确切的同一对象。

The module Object#

  • {Object}

In each module, the module free variable is a reference to the object representing the current module. In particular module.exports is accessible via the exports module-global. module isn't actually a global but rather local to each module.

在每一个模块中,自由变量 module 是一个指向代表当前模块的对象的引用。特别地,module.exports 可以通过模块内的全局变量 exports 访问。module 实际上并非全局的,而是各个模块本地的。

module.exports#

  • Object

The module.exports object is created by the Module system. Sometimes this is not acceptable, many want their module to be an instance of some class. To do this assign the desired export object to module.exports. For example suppose we were making a module called a.js

module.exports 对象是由模块系统创建的。有时候这并不能满足需求,许多人希望自己的模块是某个类的实例。要做到这一点,把想要导出的对象赋值给 module.exports 即可。例如,假设我们有一个称为 a.js 的模块:

var EventEmitter = require('events').EventEmitter;

module.exports = new EventEmitter();

// Do some work, and after some time emit
// the 'ready' event from the module itself.
setTimeout(function() {
  module.exports.emit('ready');
}, 1000);

Then in another file we could do

那么,在另一个文件中我们可以这样写

var a = require('./a');
a.on('ready', function() {
  console.log('module a is ready');
});

Note that assignment to module.exports must be done immediately. It cannot be done in any callbacks. This does not work:

注意,对 module.exports 的赋值必须立刻完成,不能放在任何回调中进行。下面的做法是无效的:

x.js:

x.js:

setTimeout(function() {
  module.exports = { a: "hello" };
}, 0);

y.js:

y.js:

var x = require('./x');
console.log(x.a);

module.require(id)#

  • id String
  • Return: Object module.exports from the resolved module

  • id String

  • Return: Object 已解析模块的 module.exports

The module.require method provides a way to load a module as if require() was called from the original module.

module.require 方法提供了一种加载模块的方式,就如同从原始模块中调用 require() 一样。

Note that in order to do this, you must get a reference to the module object. Since require() returns the module.exports, and the module is typically only available within a specific module's code, it must be explicitly exported in order to be used.

注意,为了做到这一点,你必须取得对 module 对象的引用。由于 require() 返回的是 module.exports,而 module 通常只在特定模块的代码内有效,所以要使用它就必须明确地将其导出。

module.id#

  • String

The identifier for the module. Typically this is the fully resolved filename.

用于区别模块的标识符。通常是完全解析后的文件名。

module.filename#

  • String

The fully resolved filename to the module.

模块完全解析后的文件名。

module.loaded#

  • Boolean

Whether or not the module is done loading, or is in the process of loading.

表示模块是已经加载完毕,还是正在加载的过程中。

module.parent#

  • Module Object

The module that required this one.

引入这个模块的模块。

module.children#

  • Array

The module objects required by this one.

这个模块引入的所有模块对象。

总体来说...#

To get the exact filename that will be loaded when require() is called, use the require.resolve() function.

要获取调用 require() 时将被加载的确切文件名,可使用 require.resolve() 函数。

Putting together all of the above, here is the high-level algorithm in pseudocode of what require.resolve does:

综上所述,下面用伪代码给出了 require.resolve 工作过程的高层算法:

NODE_MODULES_PATHS(START)
1. let PARTS = path split(START)
2. let ROOT = index of first instance of "node_modules" in PARTS, or 0
3. let I = count of PARTS - 1
4. let DIRS = []
5. while I > ROOT,
   a. if PARTS[I] = "node_modules" CONTINUE
   b. DIR = path join(PARTS[0 .. I] + "node_modules")
   c. DIRS = DIRS + DIR
   d. let I = I - 1
6. return DIRS

从全局文件夹加载#

If the NODE_PATH environment variable is set to a colon-delimited list of absolute paths, then node will search those paths for modules if they are not found elsewhere. (Note: On Windows, NODE_PATH is delimited by semicolons instead of colons.)

如果 NODE_PATH 环境变量设置为一个以冒号分隔的绝对路径列表,那么当模块在其他地方找不到时,node 将会从这些路径中搜索模块。(注意:在 Windows 操作系统上,NODE_PATH 是以分号分隔的。)

Additionally, node will search in the following locations:

此外,node 将会搜索以下地址:

  • 1: $HOME/.node_modules
  • 2: $HOME/.node_libraries
  • 3: $PREFIX/lib/node

Where $HOME is the user's home directory, and $PREFIX is node's configured node_prefix.

$HOME 是用户的主目录,$PREFIX 是 node 里配置的 node_prefix

These are mostly for historic reasons. You are highly encouraged to place your dependencies locally in node_modules folders. They will be loaded faster, and more reliably.

这些路径的存在大多是历史原因。强烈建议您将依赖放在本地的 node_modules 文件夹中,这样加载得更快也更可靠。

访问主模块#

When a file is run directly from Node, require.main is set to its module. That means that you can determine whether a file has been run directly by testing

当 Node 直接运行一个文件时,require.main 就被设置为它的 module 。 也就是说你可以判断一个文件是否是直接被运行的

require.main === module

For a file foo.js, this will be true if run via node foo.js, but false if run by require('./foo').

对于一个 foo.js 文件,如果通过 node foo.js 运行,该表达式为 true;但如果通过 require('./foo') 引入,则为 false。

Because module provides a filename property (normally equivalent to __filename), the entry point of the current application can be obtained by checking require.main.filename.

因为 module 提供了一个 filename 属性(通常等于 __filename), 所以当前程序的入口点可以通过 require.main.filename 来获取。

附录: 包管理技巧#

The semantics of Node's require() function were designed to be general enough to support a number of sane directory structures. Package manager programs such as dpkg, rpm, and npm will hopefully find it possible to build native packages from Node modules without modification.

Node 的 require() 函数的语义被设计得足够通用,以支持各种合理的目录结构。dpkg、rpm 和 npm 等包管理程序将有望不加修改地从 Node 模块构建本地包。

Below we give a suggested directory structure that could work:

接下来我们将给你一个可行的目录结构建议:

Let's say that we wanted to have the folder at /usr/lib/node/<some-package>/<some-version> hold the contents of a specific version of a package.

假设我们希望将一个包的指定版本放在 /usr/lib/node/<some-package>/<some-version> 目录中。

Packages can depend on one another. In order to install package foo, you may have to install a specific version of package bar. The bar package may itself have dependencies, and in some cases, these dependencies may even collide or form cycles.

包可以依赖于其他包。为了安装包 foo,可能需要安装包 bar 的一个指定版本。 包 bar 也可能有依赖关系,在某些情况下依赖关系可能发生冲突或者形成循环。

Since Node looks up the realpath of any modules it loads (that is, resolves symlinks), and then looks for their dependencies in the node_modules folders as described above, this situation is very simple to resolve with the following architecture:

因为 Node 会查找它所加载的模块的真实路径(也就是说会解析符号链接), 然后按照上文描述的方式在 node_modules 目录中寻找依赖关系,这种情形跟以下体系结构非常相像:

  • /usr/lib/node/foo/1.2.3/ - Contents of the foo package, version 1.2.3.
  • /usr/lib/node/bar/4.3.2/ - Contents of the bar package that foo depends on.
  • /usr/lib/node/foo/1.2.3/node_modules/bar - Symbolic link to /usr/lib/node/bar/4.3.2/.
  • /usr/lib/node/bar/4.3.2/node_modules/* - Symbolic links to the packages that bar depends on.

  • /usr/lib/node/foo/1.2.3/ - foo 包 1.2.3 版本的内容

  • /usr/lib/node/bar/4.3.2/ - foo 包所依赖的 bar 包的内容
  • /usr/lib/node/foo/1.2.3/node_modules/bar - 指向 /usr/lib/node/bar/4.3.2/ 的符号链接
  • /usr/lib/node/bar/4.3.2/node_modules/* - 指向 bar 包所依赖的包的符号链接

Thus, even if a cycle is encountered, or if there are dependency conflicts, every module will be able to get a version of its dependency that it can use.

因此即便存在循环依赖或依赖冲突,每个模块还是可以获得他所依赖的包的一个可用版本。

When the code in the foo package does require('bar'), it will get the version that is symlinked into /usr/lib/node/foo/1.2.3/node_modules/bar. Then, when the code in the bar package calls require('quux'), it'll get the version that is symlinked into /usr/lib/node/bar/4.3.2/node_modules/quux.

当 foo 包中的代码调用 require('bar') 时,将获得符号链接 /usr/lib/node/foo/1.2.3/node_modules/bar 指向的版本。然后,当 bar 包中的代码调用 require('quux') 时,将获得符号链接 /usr/lib/node/bar/4.3.2/node_modules/quux 指向的版本。

Furthermore, to make the module lookup process even more optimal, rather than putting packages directly in /usr/lib/node, we could put them in /usr/lib/node_modules/<name>/<version>. Then node will not bother looking for missing dependencies in /usr/node_modules or /node_modules.

此外,为了进一步优化模块搜索过程,不要将包直接放在 /usr/lib/node 目录中,而是将它们放在 /usr/lib/node_modules/<name>/<version> 目录中。 这样在依赖的包找不到的情况下,就不会一直寻找 /usr/node_modules 目录或 /node_modules 目录了。

In order to make modules available to the node REPL, it might be useful to also add the /usr/lib/node_modules folder to the $NODE_PATH environment variable. Since the module lookups using node_modules folders are all relative, and based on the real path of the files making the calls to require(), the packages themselves can be anywhere.

为了使模块在 node 的 REPL 中可用,你可能需要将 /usr/lib/node_modules 目录加入到 $NODE_PATH 环境变量中。 由于在 node_modules 目录中搜索模块使用的是相对路径,基于调用 require() 的文件所在真实路径,因此包本身可以放在任何位置。

Addons插件#

Addons are dynamically linked shared objects. They can provide glue to C and C++ libraries. The API (at the moment) is rather complex, involving knowledge of several libraries:

Addons 插件是动态链接的共享对象,它们如同胶水,把 C、C++ 类库和 Node 粘合起来。它的 API(目前来说)相当复杂,涉及以下几个类库的知识:

  • V8 JavaScript, a C++ library. Used for interfacing with JavaScript: creating objects, calling functions, etc. Documented mostly in the v8.h header file (deps/v8/include/v8.h in the Node source tree), which is also available online.

  • V8 JavaScript 引擎,一个 C++ 类库,用于和 JavaScript 交互:创建对象、调用函数等。文档大部分在 v8.h 头文件中(Node 源代码目录里的 deps/v8/include/v8.h),也有在线文档可用。(译者:想要学习 C++ 的 addons 插件编写,必须先了解 V8 的接口)

  • libuv, C event loop library. Anytime one needs to wait for a file descriptor to become readable, wait for a timer, or wait for a signal to be received one will need to interface with libuv. That is, if you perform any I/O, libuv will need to be used.

  • libuv,C 语言编写的事件循环类库。任何时候,若需要等待一个文件描述符变为可读、等待一个定时器、或者等待接收一个信号,都需要使用 libuv 的接口。也就是说,只要执行任何 I/O 操作,就会用到 libuv。

  • Internal Node libraries. Most importantly is the node::ObjectWrap class which you will likely want to derive from.

  • 内部的 Node 类库。其中最重要的是 node::ObjectWrap 类,你很可能会想从这个类派生。

  • Others. Look in deps/ for what else is available.

  • 其他。请查看 deps/ 目录了解其他可用的类库。

Node statically compiles all its dependencies into the executable. When compiling your module, you don't need to worry about linking to any of these libraries.

Node 已经将它所有的依赖都静态编译进了可执行文件,所以编译你自己的模块时,完全不必担心与这些类库的链接问题。(译者:换而言之,你在编译自己的 addons 插件时,只管在头部 #include <uv.h>,不必在 binding.gyp 中声明)

All of the following examples are available for download and may be used as a starting-point for your own Addon.

下面所有的例子都可以下载使用,或许能成为你学习和创作自己 addon 插件的起点。

Hello world(世界你好)#

To get started let's make a small Addon which is the C++ equivalent of the following JavaScript code:

作为开始,让我们编写一个小的 addon 插件,这个插件的 C++ 代码相当于下面的 JavaScript 代码:

module.exports.hello = function() { return 'world'; };

First we create a file hello.cc:

首先我们创建一个 hello.cc文件:

NODE_MODULE(hello, init)//译者:将addon插件名hello和上述init函数关联输出

Note that all Node addons must export an initialization function:

注意,所有 Node 的 addons 插件都必须导出一个初始化函数:

void Initialize (Handle<Object> exports);
NODE_MODULE(module_name, Initialize)

There is no semi-colon after NODE_MODULE as it's not a function (see node.h).

NODE_MODULE 之后没有分号,因为它不是一个函数(请参阅 node.h)。

The module_name needs to match the filename of the final binary (minus the .node suffix).

module_name(模块名)需要和最终生成的二进制文件名(去掉 .node 后缀)相同。

The source code needs to be built into hello.node, the binary Addon. To do this we create a file called binding.gyp which describes the configuration to build your module in a JSON-like format. This file gets compiled by node-gyp.

源代码需要被编译成二进制插件 hello.node。为此,我们创建一个名为 binding.gyp 的文件,它以类似 JSON 的格式描述构建模块的配置。这个文件将由 node-gyp 编译。

{
  "targets": [
    {
      "target_name": "hello", //译者:addon插件名,注意这里的名字必需和上面NODE_MODULE中的一致
      "sources": [ "hello.cc" ]  //译者:这是需要编译的源文件
    }
  ]
}

The next step is to generate the appropriate project build files for the current platform. Use node-gyp configure for that.

下一步是根据当前的操作系统平台,利用node-gyp configure命令,生成合适的项目文件。

Now you will have either a Makefile (on Unix platforms) or a vcxproj file (on Windows) in the build/ directory. Next invoke the node-gyp build command.

现在,在 build/ 目录中,你会有一个 Makefile(在 Unix 平台上)或者一个 vcxproj 文件(在 Windows 上)。接下来执行 node-gyp build 命令进行编译。(译者:当然你也可以执行 node-gyp rebuild 一步搞定)

Now you have your compiled .node bindings file! The compiled bindings end up in build/Release/.

现在你已经有了编译好的 .node 绑定文件!编译好的绑定文件位于 build/Release/ 目录中。

You can now use the binary addon in a Node project hello.js by pointing require to the recently built hello.node module:

现在,你可以在 Node 项目的 hello.js 中使用这个二进制插件了,只需让 require 指向刚刚编译出来的 hello.node 模块:

var addon = require('./build/Release/hello');
console.log(addon.hello()); // 'world'

Please see patterns below for further information or

https://github.com/arturadib/node-qt for an example in production.

请阅读下面的内容获得更多详情或者访问https://github.com/arturadib/node-qt获取一个生产环境的例子。

Addon patterns(插件方式)#

Below are some addon patterns to help you get started. Consult the online v8 reference for help with the various v8 calls, and v8's Embedder's Guide for an explanation of several concepts used such as handles, scopes, function templates, etc.

下面是一些帮助你上手编写 addon 插件的模式。遇到各种 V8 调用时可以参考在线的 V8 手册;handles、scopes、function templates 等概念的解释则见 V8 的嵌入式开发向导 (Embedder's Guide)。

In order to use these examples you need to compile them using node-gyp. Create the following binding.gyp file:

为了运行这些例子,你需要用 node-gyp 编译它们。创建如下的 binding.gyp 文件:

{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "addon.cc" ]
    }
  ]
}

In cases where there is more than one .cc file, simply add the file name to the sources array, e.g.:

如果有多个 .cc 文件,只需把文件名加入 sources 数组即可,例如:

"sources": ["addon.cc", "myexample.cc"]

Now that you have your binding.gyp ready, you can configure and build the addon:

现在你的 binding.gyp 已经就绪,可以执行 configure 和 build 命令来构建这个 addon 插件了:

$ node-gyp configure build

Function arguments(函数参数)#

The following pattern illustrates how to read arguments from JavaScript function calls and return a result. This is the main and only needed source addon.cc:

下面的模式展示了如何读取 JavaScript 函数调用传来的参数并返回一个结果。以下是主文件,也是唯一需要的源文件 addon.cc:

NODE_MODULE(addon, Init)

You can test it with the following JavaScript snippet:

你可以使用下面的 JavaScript 代码片段来测试它:

var addon = require('./build/Release/addon');
console.log( 'This should be eight:', addon.add(3,5) );

Callbacks(回调)#

You can pass JavaScript functions to a C++ function and execute them from there. Here's addon.cc:

你可以把 JavaScript 函数传递给 C++ 函数并在其中执行它们。以下是 addon.cc 文件:

NODE_MODULE(addon, Init)

Note that this example uses a two-argument form of Init() that receives the full module object as the second argument. This allows the addon to completely overwrite exports with a single function instead of adding the function as a property of exports.

注意这个例子中的 Init() 使用了两个参数,第二个参数接收完整的 module 对象。这使得插件可以用单个函数完全重写 exports,而不是把函数作为 exports 的一个属性添加。

To test it run the following JavaScript snippet:

运行下面的 JavaScript 代码片段来测试它:

var addon = require('./build/Release/addon');

addon(function(msg){
  console.log(msg); // 'hello world'
});

Object factory(对象工厂)#

You can create and return new objects from within a C++ function with this addon.cc pattern, which returns an object with property msg that echoes the string passed to createObject():

利用 addon.cc 中的这种模式,你可以在 C++ 函数中创建并返回新对象。下例返回的对象带有一个 msg 属性,其值是调用 createObject() 时传入的字符串:

NODE_MODULE(addon, Init)

To test it in JavaScript:

在js中测试如下:

var addon = require('./build/Release/addon');

var obj1 = addon('hello');
var obj2 = addon('world');
console.log(obj1.msg+' '+obj2.msg); // 'hello world'

Function factory(函数工厂)#

This pattern illustrates how to create and return a JavaScript function that wraps a C++ function:

这个模式展示了如何创建并返回一个包装了 C++ 函数的 JavaScript 函数:

NODE_MODULE(addon, Init)

To test:

测试它:

var addon = require('./build/Release/addon');

var fn = addon();
console.log(fn()); // 'hello world'

Wrapping C++ objects(包装c++对象)#

Here we will create a wrapper for a C++ object/class MyObject that can be instantiated in JavaScript through the new operator. First prepare the main module addon.cc:

这里我们将为 C++ 的对象/类 MyObject 创建一个包装,使它可以在 JavaScript 中通过 new 操作符实例化。首先准备主模块 addon.cc:

NODE_MODULE(addon, InitAll)

Then in myobject.h make your wrapper inherit from node::ObjectWrap:

然后在 myobject.h 中让你的包装类继承自 node::ObjectWrap:

#endif

And in myobject.cc implement the various methods that you want to expose. Here we expose the method plusOne by adding it to the constructor's prototype:

在 myobject.cc 中实现各种你想要暴露的方法。这里我们把 plusOne 方法添加到构造函数的原型上来暴露它:

  return scope.Close(Number::New(obj->counter_));
}

Test it with:

测试它:

var addon = require('./build/Release/addon');

var obj = new addon.MyObject(10);
console.log( obj.plusOne() ); // 11
console.log( obj.plusOne() ); // 12
console.log( obj.plusOne() ); // 13

Factory of wrapped objects(工厂包装对象)#

This is useful when you want to be able to create native objects without explicitly instantiating them with the new operator in JavaScript, e.g.

当你想在 JavaScript 中不显式使用 new 操作符就能创建原生对象时,这种模式非常有用,例如:

var obj = addon.createObject();
// 用上面的方式代替下面的:
// var obj = new addon.Object();

Let's register our createObject method in addon.cc:

让我们在 addon.cc 文件中注册 createObject 方法:

NODE_MODULE(addon, InitAll)

In myobject.h we now introduce the static method NewInstance that takes care of instantiating the object (i.e. it does the job of new in JavaScript):

myobject.h文件中,我们现在介绍静态方法NewInstance,它能够实例化对象(举个例子,它的工作就像是 在JavaScript中的new` 操作符。)

#endif

The implementation is similar to the above in myobject.cc:

myobject.cc 中的实现和前面的类似:

  return scope.Close(Number::New(obj->counter_));
}

Test it with:

测试它:

var createObject = require('./build/Release/addon');

var obj2 = createObject(20);
console.log( obj2.plusOne() ); // 21
console.log( obj2.plusOne() ); // 22
console.log( obj2.plusOne() ); // 23

Passing wrapped objects around(传递包装的对象)#

In addition to wrapping and returning C++ objects, you can pass them around by unwrapping them with Node's node::ObjectWrap::Unwrap helper function. In the following addon.cc we introduce a function add() that can take on two MyObject objects:

除了包装和返回 C++ 对象以外,你还可以用 Node 的 node::ObjectWrap::Unwrap 辅助函数对它们解包,从而把它们当作参数传递。在下面的 addon.cc 文件中,我们引入了一个函数 add(),它可以接受两个 MyObject 对象:

NODE_MODULE(addon, InitAll)

To make things interesting we introduce a public method in myobject.h so we can probe private values after unwrapping the object:

为了让事情更有趣,我们在 myobject.h 中引入一个公共方法,这样就能在解包对象之后访问它的私有成员值:

#endif

The implementation of myobject.cc is similar as before:

myobject.cc 文件中的实现和前面类似:

  return scope.Close(instance);
}

Test it with:

测试它:

var addon = require('./build/Release/addon');

var obj1 = addon.createObject(10);
var obj2 = addon.createObject(20);
var result = addon.add(obj1, obj2);

console.log(result); // 30



process#

The process object is a global object and can be accessed from anywhere. It is an instance of EventEmitter.

process对象是一个全局对象,可以在任何地方访问到它。 它是EventEmitter的一个实例。

Exit Codes#

Node will normally exit with a 0 status code when no more async operations are pending. The following status codes are used in other cases:

正常情况下,当不再有待处理的异步操作时,Node 会以状态码 0 退出。(笔者注:linux terminal 下使用 echo $? 查看,win cmd 下使用 echo %ERRORLEVEL% 查看)除此之外的其他返回状态如下:

  • 1 Uncaught Fatal Exception - There was an uncaught exception, and it was not handled by a domain or an uncaughtException event handler.
  • 2 - Unused (reserved by Bash for builtin misuse)
  • 3 Internal JavaScript Parse Error - The JavaScript source code internal in Node's bootstrapping process caused a parse error. This is extremely rare, and generally can only happen during development of Node itself.
  • 4 Internal JavaScript Evaluation Failure - The JavaScript source code internal in Node's bootstrapping process failed to return a function value when evaluated. This is extremely rare, and generally can only happen during development of Node itself.
  • 5 Fatal Error - There was a fatal unrecoverable error in V8. Typically a message will be printed to stderr with the prefix FATAL ERROR.
  • 6 Non-function Internal Exception Handler - There was an uncaught exception, but the internal fatal exception handler function was somehow set to a non-function, and could not be called.
  • 7 Internal Exception Handler Run-Time Failure - There was an uncaught exception, and the internal fatal exception handler function itself threw an error while attempting to handle it. This can happen, for example, if a process.on('uncaughtException') or domain.on('error') handler throws an error.
  • 8 - Unused. In previous versions of Node, exit code 8 sometimes indicated an uncaught exception.
  • 9 - Invalid Argument - Either an unknown option was specified, or an option requiring a value was provided without a value.
  • 10 Internal JavaScript Run-Time Failure - The JavaScript source code internal in Node's bootstrapping process threw an error when the bootstrapping function was called. This is extremely rare, and generally can only happen during development of Node itself.
  • 12 Invalid Debug Argument - The --debug and/or --debug-brk options were set, but an invalid port number was chosen.
  • >128 Signal Exits - If Node receives a fatal signal such as SIGKILL or SIGHUP, then its exit code will be 128 plus the value of the signal code. This is a standard Unix practice, since exit codes are defined to be 7-bit integers, and signal exits set the high-order bit, and then contain the value of the signal code.

  • 1 未捕获的致命异常 (Uncaught Fatal Exception) - 有未被捕获的异常,并且没有被 domain 或 uncaughtException 事件处理器处理。
  • 2 - 未使用 (Unused)(由 Bash 保留,用于内置命令误用)
  • 3 内部 JavaScript 解析错误 (Internal JavaScript Parse Error) - Node 启动过程内部的 JavaScript 源代码导致了解析错误。这极为罕见,一般只会在开发 Node 本身时发生。
  • 4 内部 JavaScript 求值失败 (Internal JavaScript Evaluation Failure) - Node 启动过程内部的 JavaScript 源代码在求值时未能返回一个函数值。这极为罕见,一般只会在开发 Node 本身时发生。
  • 5 致命错误 (Fatal Error) - V8 中发生了致命且不可恢复的错误。通常会在 stderr 打印一条带 FATAL ERROR 前缀的消息。
  • 6 非函数的内部异常处理器 (Non-function Internal Exception Handler) - 有未被捕获的异常,但内部的致命异常处理函数不知为何被设置成了非函数,无法被调用。
  • 7 内部异常处理器运行时失败 (Internal Exception Handler Run-Time Failure) - 有未被捕获的异常,而内部的致命异常处理函数在处理它时自身抛出了错误。例如 process.on('uncaughtException') 或 domain.on('error') 的处理函数抛出错误时,就会发生这种情况。
  • 8 - 未使用 (Unused)。在之前版本的 Node 中,退出码 8 有时表示未捕获的异常。
  • 9 - 无效的参数 (Invalid Argument) - 指定了未知的选项,或者需要值的选项没有提供值。
  • 10 内部 JavaScript 运行时失败 (Internal JavaScript Run-Time Failure) - Node 启动过程内部的 JavaScript 源代码在调用启动函数时抛出了错误。这极为罕见,一般只会在开发 Node 本身时发生。
  • 12 无效的调试参数 (Invalid Debug Argument) - 设置了 --debug 和/或 --debug-brk 选项,但选择了无效的端口号。
  • >128 信号退出 (Signal Exits) - 如果 Node 收到了诸如 SIGKILL 或 SIGHUP 之类的致命信号,其退出码将是 128 加上信号代码的值。这是标准的 Unix 做法:退出码被定义为 7 位整数,信号退出会设置高位,并在其中包含信号代码的值。

事件: 'exit'#

Emitted when the process is about to exit. This is a good hook to perform constant time checks of the module's state (like for unit tests). The main event loop will no longer be run after the 'exit' callback finishes, so timers may not be scheduled.

当进程将要退出时触发。这是一个在固定时间检查模块状态(如单元测试)的好时机。需要注意的是 'exit' 的回调结束后,主事件循环将不再运行,所以计时器也会失效。

Example of listening for exit:

监听 exit 事件的例子:

process.on('exit', function() {
  // 设置一个延迟执行
  setTimeout(function() {
    console.log('主事件循环已停止,所以不会执行');
  }, 0);
  console.log('退出前执行');
});

事件: 'uncaughtException'(未捕获错误)#

Emitted when an exception bubbles all the way back to the event loop. If a listener is added for this exception, the default action (which is to print a stack trace and exit) will not occur.

当一个异常冒泡回归到事件循环中就会触发这个事件,如果建立了一个监听器来监听这个异常,默认的行为(打印堆栈跟踪信息并退出)就不会发生。

Example of listening for uncaughtException:

监听 uncaughtException 示例:

process.on('uncaughtException', function(err) {
  console.log('捕获到异常: ' + err);
});

// 故意制造一个异常,而且不 catch 捕获它
nonexistentFunc();
console.log('This will not run.');

Note that uncaughtException is a very crude mechanism for exception handling.

注意,uncaughtException 是一种非常粗糙的异常处理机制。

Don't use it, use domains instead. If you do use it, restart your application after every unhandled exception!

尽量不要使用它,请用 domains 来代替。如果你一定要用,请在每次出现未被处理的异常之后重启你的应用!

Do not use it as the node.js equivalent of On Error Resume Next. An unhandled exception means your application - and by extension node.js itself - is in an undefined state. Blindly resuming means anything could happen.

不要 把它当作 node.js 中的 On Error Resume Next(出错后恢复运行)来使用。未处理的异常意味着你的应用,以及延伸开来的 Node.js 自身,正处于未定义的状态。盲目地恢复执行意味着任何事情都可能发生。

Think of resuming as pulling the power cord when you are upgrading your system. Nine out of ten times nothing happens - but the 10th time, your system is bust.

把恢复执行想象成在升级系统时拔掉电源线:十次里有九次什么问题也没有,但第十次,你的系统就会崩溃。

You have been warned.

我们已经警告过你了。

Signal Events#

Emitted when the processes receives a signal. See sigaction(2) for a list of standard POSIX signal names such as SIGINT, SIGUSR1, etc.

当进程接收到信号时触发。标准 POSIX 信号名称列表详见 sigaction(2),如 SIGINT、SIGUSR1 等。

Example of listening for SIGINT:

监听 SIGINT 信号的示例:

// 设置 'SIGINT' 信号触发事件
process.on('SIGINT', function() {
  console.log('收到 SIGINT 信号。  退出请使用 Ctrl + D ');
});

An easy way to send the SIGINT signal is with Control-C in most terminal programs.

在大多数终端下,一个发送 SIGINT 信号的简单方法是按下 ctrl + c

process.stdout#

A Writable Stream to stdout.

一个指向标准输出流 (stdout) 的可写流 (Writable Stream)。

Example: the definition of console.log

举例: console.log 的实现

console.log = function(d) {
  process.stdout.write(d + '\n');
}; 

process.stderr and process.stdout are unlike other streams in Node in that writes to them are usually blocking. They are blocking in the case that they refer to regular files or TTY file descriptors. In the case they refer to pipes, they are non-blocking like other streams.

process.stderr 和 process.stdout 不像 Node 中其他的流(Streams) 那样,他们通常是阻塞式的写入。当其引用指向 普通文件 或者 TTY文件描述符 时他们就是阻塞的(注:TTY 可以理解为终端的一种,可联想 PuTTY,详见百科)。当他们引用指向管道(pipes)时,他们就同其他的流(Streams)一样是非阻塞的。

To check if Node is being run in a TTY context, read the isTTY property on process.stderr, process.stdout, or process.stdin:

要检查 Node 是否正在一个 TTY 上下文中运行(注:linux 中没有运行在 tty 下的进程是 守护进程),可以检查 process.stderr、process.stdout 或 process.stdin 的 isTTY 属性:

$ node -p "Boolean(process.stdout.isTTY)"
true
$ node -p "Boolean(process.stdout.isTTY)" | cat
false 

See the tty docs for more information.

更多信息,请查看 tty 文档

process.stderr#

A writable stream to stderr.

一个指向标准错误流(stderr)的 可写的流(Writable Stream)。

process.stderr and process.stdout are unlike other streams in Node in that writes to them are usually blocking. They are blocking in the case that they refer to regular files or TTY file descriptors. In the case they refer to pipes, they are non-blocking like other streams.

process.stderr 和 process.stdout 不像 Node 中其他的流(Streams) 那样,他们通常是阻塞式的写入。当其引用指向 普通文件 或者 TTY文件描述符 时他们就是阻塞的(注:TTY 可以理解为终端的一种,可联想 PuTTY,详见百科)。当他们引用指向管道(pipes)时,他们就同其他的流(Streams)一样是非阻塞的。

process.stdin#

A Readable Stream for stdin. The stdin stream is paused by default, so one must call process.stdin.resume() to read from it.

一个指向 标准输入流(stdin) 的可读流(Readable Stream)。标准输入流默认是暂停 (pause) 的,所以必须要调用 process.stdin.resume() 来恢复 (resume) 接收。

Example of opening standard input and listening for both events:

打开标准输入流,并监听两个事件的示例:

process.stdin.on('data', function(chunk) {
  process.stdout.write('data: ' + chunk);
});

process.stdin.on('end', function() {
  process.stdout.write('end');
});


// gets 函数的简单实现
function gets(cb){
  process.stdin.resume();
  process.stdin.setEncoding('utf8');

  process.stdin.on('data', function(chunk) {
     process.stdin.pause();
     cb(chunk);
  });
}

gets(function(result){
  console.log("["+result+"]");
});

process.argv#

An array containing the command line arguments. The first element will be 'node', the second element will be the name of the JavaScript file. The next elements will be any additional command line arguments.

一个包含命令行参数的数组。第一个元素是 'node',第二个元素是 JavaScript 文件的名称,接下来的元素依次是附加的命令行参数。

// 打印 process.argv
process.argv.forEach(function(val, index, array) {
  console.log(index + ': ' + val);
});

This will generate:

输出将会是:

$ node process-2.js one two=three four
0: node
1: /Users/mjr/work/node/process-2.js
2: one
3: two=three
4: four 

process.execPath#

This is the absolute pathname of the executable that started the process.

开启当前进程的这个可执行文件的绝对路径。

Example:

示例:

/usr/local/bin/node 

process.execArgv#

This is the set of node-specific command line options from the executable that started the process. These options do not show up in process.argv, and do not include the node executable, the name of the script, or any options following the script name. These options are useful in order to spawn child processes with the same execution environment as the parent.

process.argv 类似,不过是用于保存 node特殊(node-specific) 的命令行选项(参数)。这些特殊的选项不会出现在 process.argv 中,而且 process.execArgv 不会保存 process.argv 中保存的参数(如 0:node 1:文件名 2.3.4.参数 等), 所有文件名之后的参数都会被忽视。这些选项可以用于派生与与父进程相同执行环境的子进程。

Example:

示例:

$ node --harmony script.js --version 

results in process.execArgv:

process.execArgv 中的特殊选项:

['--harmony'] 

and process.argv:

process.argv 接收到的参数:

['/usr/local/bin/node', 'script.js', '--version'] 

process.abort()#

This causes node to emit an abort. This will cause node to exit and generate a core file.

这会使 Node 触发一次 abort 中止,导致 Node 退出并生成一个核心转储文件 (core file)。

process.chdir(directory)#

Changes the current working directory of the process or throws an exception if that fails.

改变进程当前的工作目录,若操作失败则抛出异常。

console.log('当前目录:' + process.cwd());
try {
  process.chdir('/tmp');
  console.log('新目录:' + process.cwd());
}
catch (err) {
  console.log('chdir: ' + err);
}

process.cwd()#

Returns the current working directory of the process.

返回进程当前的工作目录。

console.log('当前目录:' + process.cwd());

process.env#

An object containing the user environment. See environ(7).

一个包括用户环境的对象。详细参见 environ(7)。
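一个简单的读写示例(其中的变量名 DEMO_FLAG 只是演示用的假设):

```javascript
// 读取一个环境变量;不存在时为 undefined
console.log(process.env.HOME);

// 对 process.env 的修改只影响当前进程及其派生的子进程,
// 不会写回到操作系统的环境中
process.env.DEMO_FLAG = '1';
console.log(process.env.DEMO_FLAG); // '1'
```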

process.exit([code])#

Ends the process with the specified code. If omitted, exit uses the 'success' code 0.

终止当前进程并返回给定的 code。如果省略了 code,退出是会默认返回成功的状态码('success' code) 也就是 0

To exit with a 'failure' code:

退出并返回失败的状态 ('failure' code):

process.exit(1); 

The shell that executed node should see the exit code as 1.

执行上述代码,用来执行 node 的 shell 就能收到值为 1 的 exit code

process.exitCode#

A number which will be the process exit code, when the process either exits gracefully, or is exited via process.exit() without specifying a code.

当进程正常退出,或者通过未指定 code 的 process.exit() 退出时,该属性中存储的数字将成为进程的退出码 (exit code)。

Specifying a code to process.exit(code) will override any previous setting of process.exitCode.

如果调用 process.exit(code) 时指明了退出码 code,则会覆盖之前对 process.exitCode 的任何设定。

process.getgid()#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Gets the group identity of the process. (See getgid(2).) This is the numerical group id, not the group name.

获取进程的群组标识(详见getgid(2))。获取到的是群组的数字ID,不是群组名称。

if (process.getgid) {
  console.log('当前 gid: ' + process.getgid());
}

process.setgid(id)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Sets the group identity of the process. (See setgid(2).) This accepts either a numerical ID or a groupname string. If a groupname is specified, this method blocks while resolving it to a numerical ID.

设置进程的群组标识(详见 setgid(2))。参数可以是一个数字 ID 或者群组名字符串。如果指定了群组名,这个方法会在把群组名解析为数字 ID 期间阻塞。

if (process.getgid && process.setgid) {
  console.log('当前 gid: ' + process.getgid());
  try {
    process.setgid(501);
    console.log('新 gid: ' + process.getgid());
  }
  catch (err) {
    console.log('设置 gid 失败: ' + err);
  }
}

process.getuid()#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Gets the user identity of the process. (See getuid(2).) This is the numerical userid, not the username.

获取执行进程的用户 ID(详见 getuid(2))。这是用户的数字 ID,不是用户名。

if (process.getuid) {
  console.log('当前 uid: ' + process.getuid());
}

process.setuid(id)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Sets the user identity of the process. (See setuid(2).) This accepts either a numerical ID or a username string. If a username is specified, this method blocks while resolving it to a numerical ID.

设置执行进程的用户 ID(详见 setuid(2))。参数可以是一个数字 ID 或者用户名字符串。如果指定了用户名,该方法会在把用户名解析为数字 ID 期间阻塞。

if (process.getuid && process.setuid) {
  console.log('当前 uid: ' + process.getuid());
  try {
    process.setuid(501);
    console.log('新 uid: ' + process.getuid());
  }
  catch (err) {
    console.log('设置 uid 失败: ' + err);
  }
}

process.getgroups()#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Returns an array with the supplementary group IDs. POSIX leaves it unspecified if the effective group ID is included but node.js ensures it always is.

返回一个包含附加组 ID (supplementary group IDs) 的数组。POSIX 标准并未规定其中是否包含有效组 ID (effective group ID),而 node.js 则确保它始终包含在内。

process.setgroups(groups)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Sets the supplementary group IDs. This is a privileged operation, meaning you need to be root or have the CAP_SETGID capability.

设置附加组 ID。这是一个特权操作,意味着你需要拥有 root 身份或者 CAP_SETGID 能力。(译者:CAP_SETGID 表示允许程序使用 setgid 函数,这与文件的 setgid 权限位无关)

The list can contain group IDs, group names or both.

这个列表可以包含组 ID、组名,或者两者兼有。

process.initgroups(user, extra_group)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Reads /etc/group and initializes the group access list, using all groups of which the user is a member. This is a privileged operation, meaning you need to be root or have the CAP_SETGID capability.

读取 /etc/group 并初始化组访问列表,使用该用户所属的所有分组。这是一个特权操作,意味着你需要拥有 root 身份或者 CAP_SETGID 能力。

user is a user name or user ID. extra_group is a group name or group ID.

user 是一个用户名或者用户 ID,extra_group 是一个组名或者组 ID。

Some care needs to be taken when dropping privileges. Example:

降低权限 (dropping privileges) 时需要多加小心。例如:

console.log(process.getgroups());         // [ 0 ]
process.initgroups('bnoordhuis', 1000);   // switch user
console.log(process.getgroups());         // [ 27, 30, 46, 1000, 0 ]
process.setgid(1000);                     // drop root gid
console.log(process.getgroups());         // [ 27, 30, 46, 1000 ]

process.version#

A compiled-in property that exposes NODE_VERSION.

一个暴露编译时存储版本信息的内置变量 NODE_VERSION 的属性。

console.log('版本: ' + process.version);

process.versions#

A property exposing version strings of node and its dependencies.

一个暴露存储 node 以及其依赖包 版本信息的属性。

console.log(process.versions); 

Will print something like:

输出:

{ http_parser: '1.0',
  node: '0.10.4',
  v8: '3.14.5.8',
  ares: '1.9.0-DEV',
  uv: '0.10.3',
  zlib: '1.2.3',
  modules: '11',
  openssl: '1.0.1e' }

process.config#

An Object containing the JavaScript representation of the configure options that were used to compile the current node executable. This is the same as the "config.gypi" file that was produced when running the ./configure script.

一个包含用于编译当前 node 可执行文件的配置选项的对象,其内容与运行 ./configure 脚本生成的 "config.gypi" 文件相同。

An example of the possible output looks like:

可能的输出示例如下:

{ target_defaults:
   { cflags: [],
     default_configuration: 'Release',
     defines: [],
     include_dirs: [],
     libraries: [] },
  variables:
   { host_arch: 'x64',
     node_install_npm: 'true',
     node_prefix: '',
     node_shared_cares: 'false',
     node_shared_http_parser: 'false',
     node_shared_libuv: 'false',
     node_shared_v8: 'false',
     node_shared_zlib: 'false',
     node_use_dtrace: 'false',
     node_use_openssl: 'true',
     node_shared_openssl: 'false',
     strict_aliasing: 'true',
     target_arch: 'x64',
     v8_use_snapshot: 'true' } }

process.kill(pid, [signal])#

Send a signal to a process. pid is the process id and signal is the string describing the signal to send. Signal names are strings like 'SIGINT' or 'SIGUSR1'. If omitted, the signal will be 'SIGTERM'. See kill(2) for more information.

向进程发送一个信号。 pid 是进程的 id 而 signal 则是描述信号的字符串名称。信号的名称都形似 'SIGINT' 或者 'SIGUSR1'。如果没有指定参数则会默认发送 'SIGTERM' 信号,更多信息请查看 kill(2) 。

Note that just because the name of this function is process.kill, it is really just a signal sender, like the kill system call. The signal sent may do something other than kill the target process.

值得注意的是,虽然这个函数名为 process.kill,但就像 kill 系统调用(详见《Unix高级编程》)一样,它其实只是一个信号发送器。发送的信号除了杀死 (kill) 目标进程之外,也可能用来做别的事情。

Example of sending a signal to yourself:

向当前进程发送信号的示例:

process.kill(process.pid, 'SIGHUP'); 

process.pid#

The PID of the process.

当前进程的 PID

console.log('当前进程 id: ' + process.pid);

process.title#

Getter/setter to set what is displayed in 'ps'.

获取/设置 (Getter/setter) 'ps' 中显示的进程名。

When used as a setter, the maximum length is platform-specific and probably short.

当设置该属性时,所能设置的字符串最大长度视具体平台而定,如果超过的话会自动截断。

On Linux and OS X, it's limited to the size of the binary name plus the length of the command line arguments because it overwrites the argv memory.

在 Linux 和 OS X 上,它受限于二进制文件名的长度加上命令行参数的长度,因为设置标题会覆盖参数内存 (argv memory)。

v0.8 allowed for longer process title strings by also overwriting the environ memory but that was potentially insecure/confusing in some (rather obscure) cases.

v0.8 版本通过同时覆盖环境变量内存 (environ memory) 来允许更长的进程标题字符串,但在某些(相当隐晦的)情况下,这可能带来潜在的不安全或混乱。
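一个读取并设置进程标题的草图(标题 'my-demo-app' 为演示用的假设值):

```javascript
console.log('当前标题: ' + process.title);

// 设置标题;长度上限与平台相关,过长的部分可能被截断
process.title = 'my-demo-app';
console.log('新标题: ' + process.title);
```

设置之后,在 ps 的输出中看到的进程名就会随之改变。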

process.arch#

What processor architecture you're running on: 'arm', 'ia32', or 'x64'.

返回当前 CPU 处理器的架构:'arm'、'ia32' 或者 'x64'.

console.log('当前CPU架构是:' + process.arch);

process.platform#

What platform you're running on: 'darwin', 'freebsd', 'linux', 'sunos' or 'win32'

返回当前程序运行的平台:'darwin', 'freebsd', 'linux', 'sunos' 或者 'win32'

console.log('当前系统平台是: ' + process.platform);

process.memoryUsage()#

Returns an object describing the memory usage of the Node process measured in bytes.

返回一个描述 Node 进程内存使用情况的对象,单位为字节 (bytes)。

var util = require('util');
console.log(util.inspect(process.memoryUsage()));

This will generate:

输出将会是:

{ rss: 4935680,
  heapTotal: 1826816,
  heapUsed: 650472 } 

heapTotal and heapUsed refer to V8's memory usage.

heapTotalheapUsed 是根据 V8引擎的内存使用情况来的

process.nextTick(callback)#

  • callback Function

  • callback Function

Once the current event loop turn runs to completion, call the callback function.

在事件循环的当前一轮运行完毕之后,调用 callback 回调函数。

This is not a simple alias to setTimeout(fn, 0), it's much more efficient. It runs before any additional I/O events (including timers) fire in subsequent ticks of the event loop.

它并不是 setTimeout(fn, 0) 的简单别名,而是比后者高效得多。它会在事件循环的后续 tick 中、任何额外的 I/O 事件(包括定时器)触发之前运行。

console.log('开始');
process.nextTick(function() {
  console.log('nextTick 回调');
});
console.log('已设定');
// 输出:
// 开始
// 已设定
// nextTick 回调

This is important in developing APIs where you want to give the user the chance to assign event handlers after an object has been constructed, but before any I/O has occurred.

在开发 API 时,如果你想让用户有机会在对象构造完成之后、任何 I/O 发生之前注册事件处理器,那么这个函数就十分重要。

// thing.startDoingStuff() 现在被调用了, 而不是之前.

It is very important for APIs to be either 100% synchronous or 100% asynchronous. Consider this example:

让你的 API 要么 100% 同步、要么 100% 异步,这一点非常重要!参考下面的例子:

// 警告!不要这样用!危险且不安全!
function maybeSync(arg, cb) {
  if (arg) {
    cb();
    return;
  }

  fs.stat('file', cb);
}

This API is hazardous. If you do this:

这样的 API 是危险的。如果你这样调用:

maybeSync(true, function() {
  foo();
});
bar(); 

then it's not clear whether foo() or bar() will be called first.

那么,foo() 和 bar() 哪一个先被调用就是不确定的。

This approach is much better:

下面的方式就好得多:

function definitelyAsync(arg, cb) {
  if (arg) {
    process.nextTick(cb);
    return;
  }

  fs.stat('file', cb);
}

Note: the nextTick queue is completely drained on each pass of the event loop before additional I/O is processed. As a result, recursively setting nextTick callbacks will block any I/O from happening, just like a while(true); loop.

注意:事件循环的每一轮都会在处理额外的 I/O 之前把 nextTick 队列完全清空。因此,递归地设置 nextTick 回调会像一个 while(true); 循环一样,阻止任何 I/O 操作的发生。
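下面的草图演示了 nextTick 回调先于定时器等 I/O 事件执行:

```javascript
var order = [];

setTimeout(function() {
  order.push('timeout');
  console.log(order.join(' -> ')); // 'nextTick -> timeout'
}, 0);

process.nextTick(function() {
  order.push('nextTick');
});
```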

process.umask([mask])#

Sets or reads the process's file mode creation mask. Child processes inherit the mask from the parent process. Returns the old mask if mask argument is given, otherwise returns the current mask.

设置或者读取进程的文件模式的创建掩码。子进程从父进程中继承这个掩码。如果设定了参数 mask 那么返回旧的掩码,否则返回当前的掩码。

var newmask = 0022;
var oldmask = process.umask(newmask);
console.log('原掩码: ' + oldmask.toString(8) + '\n' +
            '新掩码: ' + newmask.toString(8));

process.uptime()#

Number of seconds Node has been running.

返回 Node 程序已运行的秒数。
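
下面是一个最简单的用法示例(具体的输出秒数取决于运行时刻):

```javascript
// 打印当前 Node 进程已经运行的秒数
var uptime = process.uptime();
console.log('Node 已运行 %d 秒', uptime);
```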

process.hrtime()#

Returns the current high-resolution real time in a [seconds, nanoseconds] tuple Array. It is relative to an arbitrary time in the past. It is not related to the time of day and therefore not subject to clock drift. The primary use is for measuring performance between intervals.

返回当前的高分辨时间,形式为 [秒, 纳秒] 的元组数组。它相对于过去的某个任意时间,与日期无关,因此不受时钟漂移的影响。主要用途是精确地衡量时间间隔,以测量程序的性能。

You may pass in the result of a previous call to process.hrtime() to get a diff reading, useful for benchmarks and measuring intervals:

你可以将前一个 process.hrtime() 的结果传递给当前的 process.hrtime() 函数,结果会返回一个比较值,用于基准和衡量时间间隔。

var time = process.hrtime();
// [ 1800216, 25 ]

setTimeout(function() {
  var diff = process.hrtime(time);
  // [ 1, 552 ]

  console.log('基准相差 %d 纳秒', diff[0] * 1e9 + diff[1]);
  // 基准相差 1000000527 纳秒
}, 1000);

utils#

稳定度: 4 - 冻结

These functions are in the module 'util'. Use require('util') to access them.

这些函数定义在 'util' 模块中,通过 require('util') 即可使用。

The util module is primarily designed to support the needs of Node's internal APIs. Many of these utilities are useful for your own programs. If you find that these functions are lacking for your purposes, however, you are encouraged to write your own utilities. We are not interested in any future additions to the util module that are unnecessary for Node's internal functionality.

util模块设计的主要目的是为了满足Node内部API的需求 。这个模块中的很多方法在你编写Node程序的时候都是很有帮助的。如果你觉得提供的这些方法满足不了你的需求,那么我们鼓励你编写自己的实用工具方法。我们 不希望util模块中添加任何对于Node的内部功能非必要的扩展。

util.debuglog(section)#

  • section String The section of the program to be debugged
  • Returns: Function The logging function

  • section String 被调试的程序节点部分

  • 返回值: Function 日志处理函数

This is used to create a function which conditionally writes to stderr based on the existence of a NODE_DEBUG environment variable. If the section name appears in that environment variable, then the returned function will be similar to console.error(). If not, then the returned function is a no-op.

这个方法根据是否存在 NODE_DEBUG 环境变量,来创建一个按条件写入 stderr 的函数。如果 section 的名字出现在这个环境变量里,那么返回的函数将类似于 console.error();否则,返回的是一个空操作函数。

For example:

例如:

var util = require('util');
var debuglog = util.debuglog('foo');

var bar = 123;
debuglog('hello from foo [%d]', bar);


If this program is run with `NODE_DEBUG=foo` in the environment, then
it will output something like:

如果这个程序在环境变量 `NODE_DEBUG=foo` 下运行,那么它将会输出类似这样的内容:

    FOO 3245: hello from foo [123]

where `3245` is the process id.  If it is not run with that
environment variable set, then it will not print anything.

其中 `3245` 是进程 ID。如果运行时没有设置该环境变量,那么将不会输出任何东西。

You may separate multiple `NODE_DEBUG` environment variables with a
comma.  For example, `NODE_DEBUG=fs,net,tls`.

`NODE_DEBUG` 里的多个名字可以用逗号进行分隔。例如,`NODE_DEBUG=fs,net,tls`。

## util.format(format, [...])

Returns a formatted string using the first argument as a `printf`-like format.

根据第一个参数,返回一个格式化字符串,类似`printf`的格式化输出。

The first argument is a string that contains zero or more *placeholders*.
Each placeholder is replaced with the converted value from its corresponding
argument. Supported placeholders are:

第一个参数是一个字符串,包含零个或多个*占位符*。
每一个占位符被替换为与其对应的转换后的值。
支持的占位符有:

* `%s` - String.
* `%d` - Number (both integer and float).
* `%j` - JSON.  Replaced with the string `'[Circular]'` if the argument
         contains circular references.
* `%%` - single percent sign (`'%'`). This does not consume an argument.

* `%s` - 字符串.
* `%d` - 数字 (整型和浮点型).
* `%j` - JSON. 如果这个参数包含循环对象的引用,将会被替换成字符串 `'[Circular]'`。
* `%%` - 单独一个百分号(`'%'`)。不会消耗一个参数。

If the placeholder does not have a corresponding argument, the placeholder is
not replaced.

如果占位符没有相对应的参数,占位符将不会被替换。

    util.format('%s:%s', 'foo'); // 'foo:%s'

If there are more arguments than placeholders, the extra arguments are
converted to strings with `util.inspect()` and these strings are concatenated,
delimited by a space.

如果参数多于占位符,额外的参数将会用`util.inspect()`转换为字符串。这些字符串被连接在一起,并且以空格分隔。

    util.format('%s:%s', 'foo', 'bar', 'baz'); // 'foo:bar baz'

If the first argument is not a format string then `util.format()` returns
a string that is the concatenation of all its arguments separated by spaces.
Each argument is converted to a string with `util.inspect()`.

如果第一个参数不是一个格式化字符串,那么`util.format()`会返回一个把所有参数以空格分隔拼接起来的字符串。每个参数都用`util.inspect()`转换为字符串。

    util.format(1, 2, 3); // '1 2 3'

## util.log(string)

Output with timestamp on `stdout`.

在控制台进行输出,并带有时间戳。

    示例:require('util').log('Timestamped message.');

## util.inspect(object, [options])

Return a string representation of `object`, which is useful for debugging.

返回一个对象的字符串表现形式, 在代码调试的时候非常有用.

An optional *options* object may be passed that alters certain aspects of the
formatted string:

可以通过加入一些可选选项,来改变对象的格式化输出形式:

 - `showHidden` - if `true` then the object's non-enumerable properties will be
   shown too. Defaults to `false`.

 - `showHidden` - 如果设为 `true`,那么该对象的不可枚举的属性将会被显示出来。默认为`false`.

 - `depth` - tells `inspect` how many times to recurse while formatting the
   object. This is useful for inspecting large complicated objects. Defaults to
   `2`. To make it recurse indefinitely pass `null`.

 - `depth` - 告诉 `inspect` 格式化对象的时候递归多少次。这个选项在格式化复杂对象的时候比较有用。 默认为
   `2`。如果想无穷递归下去,则赋值为`null`即可。

 - `colors` - if `true`, then the output will be styled with ANSI color codes.
   Defaults to `false`. Colors are customizable, see below.

 - `colors` - 如果设为`true`,输出将会使用 ANSI 颜色代码风格。
   默认为`false`。颜色是可定制的,见下文。

 - `customInspect` - if `false`, then custom `inspect(depth, opts)` functions
   defined on the objects being inspected won't be called. Defaults to `true`.

 - `customInspect` - 如果设为 `false`,那么定义在被检查对象上的`inspect(depth, opts)` 方法将不会被调用。 默认为`true`。

Example of inspecting all properties of the `util` object:

示例:检查`util`对象上的所有属性

    console.log(util.inspect(util, { showHidden: true, depth: null }));

Values may supply their own custom `inspect(depth, opts)` functions, when
called they receive the current depth in the recursive inspection, as well as
the options object passed to `util.inspect()`.

各个值可以提供自己的自定义`inspect(depth, opts)`方法;被调用时,该方法会接收到当前递归检查的深度,以及传入`util.inspect()`的选项对象。

### 自定义 `util.inspect` 颜色

<!-- type=misc -->

Color output (if enabled) of `util.inspect` is customizable globally
via `util.inspect.styles` and `util.inspect.colors` objects.

`util.inspect`彩色输出(如果启用的话) ,可以通过`util.inspect.styles` 和 `util.inspect.colors` 来全局定义。

`util.inspect.styles` is a map assigning each style a color
from `util.inspect.colors`.
Highlighted styles and their default values are:
 * `number` (yellow)
 * `boolean` (yellow)
 * `string` (green)
 * `date` (magenta)
 * `regexp` (red)
 * `null` (bold)
 * `undefined` (grey)
 * `special` - only function at this time (cyan)
 * `name` (intentionally no styling)

`util.inspect.styles`是一个映射,它把每种风格关联到`util.inspect.colors`中的一种颜色。
高亮风格和它们的默认值如下:
 * `number` (黄色)
 * `boolean` (黄色)
 * `string` (绿色)
 * `date` (洋红色)
 * `regexp` (红色)
 * `null` (粗体)
 * `undefined` (灰色)
 * `special` - 目前仅用于函数 (青色)
 * `name` (无风格)

Predefined color codes are: `white`, `grey`, `black`, `blue`, `cyan`, 
`green`, `magenta`, `red` and `yellow`.
There are also `bold`, `italic`, `underline` and `inverse` codes.

预定义的颜色代码: `white`, `grey`, `black`, `blue`, `cyan`, 
`green`, `magenta`, `red` 和 `yellow`。
还有 `bold`, `italic`, `underline` 和 `inverse` 代码。

### 自定义对象的`inspect()`方法

<!-- type=misc -->

Objects also may define their own `inspect(depth)` function which `util.inspect()`
will invoke and use the result of when inspecting the object:

对象可以定义自己的 `inspect(depth)`方法;当使用`util.inspect()`检查该对象的时候,将会执行对象自定义的检查方法。

    var obj = { name: 'nate' };
    obj.inspect = function(depth) {
      return '{' + this.name + '}';
    };

    util.inspect(obj);
      // "{nate}"

You may also return another Object entirely, and the returned String will be
formatted according to the returned Object. This is similar to how
`JSON.stringify()` works:

您也可以返回一个完全不同的对象,此时返回的字符串将根据该对象来格式化。这与`JSON.stringify()`的工作原理类似:

    var obj = { foo: 'this will not show up in the inspect() output' };
    obj.inspect = function(depth) {
      return { bar: 'baz' };
    };

    util.inspect(obj);
      // "{ bar: 'baz' }"

## util.isArray(object)

Returns `true` if the given "object" is an `Array`. `false` otherwise.

如果给定的对象是`数组`类型,就返回`true`,否则返回`false`

    util.isArray([])
      // true
    util.isArray(new Array)
      // true
    util.isArray({})
      // false

## util.isRegExp(object)

Returns `true` if the given "object" is a `RegExp`. `false` otherwise.

如果给定的对象是`RegExp`类型,就返回`true`,否则返回`false`。

    util.isRegExp(/some regexp/)
      // true
    util.isRegExp(new RegExp('another regexp'))
      // true
    util.isRegExp({})
      // false

## util.isDate(object)

Returns `true` if the given "object" is a `Date`. `false` otherwise.

如果给定的对象是`Date`类型,就返回`true`,否则返回`false`。

    util.isDate(new Date())
      // true
    util.isDate(Date())
      // false (没有关键字 'new' 返回一个字符串)
    util.isDate({})
      // false

## util.isError(object)

Returns `true` if the given "object" is an `Error`. `false` otherwise.

如果给定的对象是`Error`类型,就返回`true`,否则返回`false`。

    util.isError(new Error())
      // true
    util.isError(new TypeError())
      // true
    util.isError({ name: 'Error', message: 'an error occurred' })
      // false

## util.inherits(constructor, superConstructor)

Inherit the prototype methods from one
[constructor](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Object/constructor)
into another.  The prototype of `constructor` will be set to a new
object created from `superConstructor`.

从一个[构造函数](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Object/constructor)中继承原型方法到另一个构造函数。`constructor`的原型将被设置为一个由`superConstructor`创建的新对象。

As an additional convenience, `superConstructor` will be accessible
through the `constructor.super_` property.

作为额外的便利,你可以通过 `constructor.super_` 属性访问到 `superConstructor`。

    var util = require("util");
    var events = require("events");

    function MyStream() {
        events.EventEmitter.call(this);
    }

    util.inherits(MyStream, events.EventEmitter);

    MyStream.prototype.write = function(data) {
        this.emit("data", data);
    }

    var stream = new MyStream();

    console.log(stream instanceof events.EventEmitter); // true
    console.log(MyStream.super_ === events.EventEmitter); // true

    stream.on("data", function(data) {
        console.log('Received data: "' + data + '"');
    })
    stream.write("It works!"); // 输出结果:Received data: "It works!"

## util.debug(string)

    稳定度: 0 - 已过时: 请使用 console.error() 代替

Deprecated predecessor of `console.error`.

`console.error`的已过时的前身

## util.error([...])

    稳定度: 0 - 已过时: 请使用 console.error() 代替

Deprecated predecessor of `console.error`.

`console.error`的已过时的前身

## util.puts([...])

   稳定度: 0 - 已过时: 请使用 console.log() 代替

Deprecated predecessor of `console.log`.

`console.log`的已过时的前身

## util.print([...])

   稳定度: 0 - 已过时: 请使用 console.log() 代替

Deprecated predecessor of `console.log`.

`console.log`的已过时的前身

## util.pump(readableStream, writableStream, [callback])

   稳定度: 0 - 已过时: 请使用readableStream.pipe(writableStream)代替

Deprecated predecessor of `stream.pipe()`.


`stream.pipe()`的已过时的前身

# 事件 (Events)

    稳定度: 4 - 冻结

<!--type=module-->

Many objects in Node emit events: a `net.Server` emits an event each time
a peer connects to it, a `fs.readStream` emits an event when the file is
opened. All objects which emit events are instances of `events.EventEmitter`.
You can access this module by doing: `require("events");`

Node里面的许多对象都会分发事件:一个`net.Server`对象会在每次有新连接时分发一个事件, 一个`fs.readStream`对象会在文件被打开的时候发出一个事件。
所有这些产生事件的对象都是 `events.EventEmitter` 的实例。
你可以通过`require("events");`来访问该模块

Typically, event names are represented by a camel-cased string, however,
there aren't any strict restrictions on that, as any string will be accepted.

通常,事件名是驼峰命名 (camel-cased) 的字符串。不过也没有强制的要求,任何字符串都是可以使用的。

Functions can then be attached to objects, to be executed when an event
is emitted. These functions are called _listeners_. Inside a listener
function, `this` refers to the `EventEmitter` that the listener was
attached to.

为了处理发出的事件,我们将函数 (Function) 关联到对象上。
我们把这些函数称为 _监听器 (listeners)_。
在监听函数中 `this` 指向当前监听函数所关联的 `EventEmitter` 对象。

## 类: events.EventEmitter

To access the EventEmitter class, `require('events').EventEmitter`.

通过 `require('events').EventEmitter` 获取 EventEmitter 类。

When an `EventEmitter` instance experiences an error, the typical action is
to emit an `'error'` event.  Error events are treated as a special case in node.
If there is no listener for it, then the default action is to print a stack
trace and exit the program.

当 `EventEmitter` 实例遇到错误,通常的处理方法是产生一个 `'error'` 事件,node 对错误事件做特殊处理。
如果程序没有监听错误事件,程序会按照默认行为在打印出 栈追踪信息 (stack trace) 后退出。

All EventEmitters emit the event `'newListener'` when new listeners are
added and `'removeListener'` when a listener is removed.

EventEmitter 会在添加 listener 时触发 `'newListener'` 事件,删除 listener 时触发 `'removeListener'` 事件

### emitter.addListener(event, listener)
### emitter.on(event, listener)

Adds a listener to the end of the listeners array for the specified event.

添加一个 listener 至特定事件的 listener 数组尾部。

    server.on('connection', function (stream) {
      console.log('someone connected!');
    });

Returns emitter, so calls can be chained.

返回 emitter,方便链式调用。

### emitter.once(event, listener)

Adds a **one time** listener for the event. This listener is
invoked only the next time the event is fired, after which
it is removed.

添加一个 **一次性** listener,这个 listener 只会在下一次事件发生时被触发一次,触发完成后就被删除。

    server.once('connection', function (stream) {
      console.log('Ah, we have our first user!');
    });

Returns emitter, so calls can be chained.

返回 emitter,方便链式调用。

### emitter.removeListener(event, listener)

Remove a listener from the listener array for the specified event.
**Caution**: changes array indices in the listener array behind the listener.

从一个事件的 listener 数组中删除一个 listener。
**注意**:此操作会改变位于被删除 listener 之后的所有 listener 在数组中的下标。

    var callback = function(stream) {
      console.log('someone connected!');
    };
    server.on('connection', callback);
    // ...
    server.removeListener('connection', callback);

Returns emitter, so calls can be chained.

返回 emitter,方便链式调用。

### emitter.removeAllListeners([event])

Removes all listeners, or those of the specified event.

删除所有 listener,或者删除某些事件 (event) 的 listener

Returns emitter, so calls can be chained.

返回 emitter,方便链式调用。

### emitter.setMaxListeners(n)

By default EventEmitters will print a warning if more than 10 listeners are
added for a particular event. This is a useful default which helps finding
memory leaks. Obviously not all Emitters should be limited to 10. This function
allows that to be increased. Set to zero for unlimited.

在默认情况下,当多于 10 个 listener 被添加到某个事件时,EventEmitter 会打印警告,此限制在寻找内存泄露时非常有用。
显然,也不是所有的事件都应该被限制在 10 个 listener 以下,通过这个函数可以提高这个限制。设置为 0 则表示不限制。

Returns emitter, so calls can be chained.

返回 emitter,方便链式调用。

### EventEmitter.defaultMaxListeners

`emitter.setMaxListeners(n)` sets the maximum on a per-instance basis.
This class property lets you set it for *all* `EventEmitter` instances,
current and future, effective immediately. Use with care.

`emitter.setMaxListeners(n)` 设置每个 emitter 实例的最大监听数。
这个类属性为 **所有** `EventEmitter` 实例设置最大监听数(对所有已创建的实例和今后创建的实例都将立即生效)。
使用时请注意。

Note that `emitter.setMaxListeners(n)` still has precedence over
`EventEmitter.defaultMaxListeners`.

请注意,`emitter.setMaxListeners(n)` 优先于 `EventEmitter.defaultMaxListeners`。

### emitter.listeners(event)

Returns an array of listeners for the specified event.

返回指定事件的 listener 数组

    server.on('connection', function (stream) {
      console.log('someone connected!');
    });
    console.log(util.inspect(server.listeners('connection'))); // [ [Function] ]

### emitter.emit(event, [arg1], [arg2], [...])

Execute each of the listeners in order with the supplied arguments.

使用提供的参数按顺序执行指定事件的 listener

Returns `true` if event had listeners, `false` otherwise.

若事件有 listener 则返回 `true` 否则返回 `false`。

### 类方法: EventEmitter.listenerCount(emitter, event)

Return the number of listeners for a given event.

返回指定事件的 listener 数量

### 事件: 'newListener'

* `event` {String} The event name
* `listener` {Function} The event handler function

* `event` {String} 事件名
* `listener` {Function} 事件处理函数

This event is emitted any time someone adds a new listener.  It is unspecified
if `listener` is in the list returned by `emitter.listeners(event)`.

在添加 listener 时会发生该事件。
此时无法确定 `listener` 是否在 `emitter.listeners(event)` 返回的列表中。

### 事件: 'removeListener'

* `event` {String} The event name
* `listener` {Function} The event handler function

* `event` {String} 事件名
* `listener` {Function} 事件处理函数

This event is emitted any time someone removes a listener.  It is unspecified
if `listener` is in the list returned by `emitter.listeners(event)`.


在移除 listener 时会发生该事件。
此时无法确定 `listener` 是否在 `emitter.listeners(event)` 返回的列表中。

# 域

    稳定度: 2 - 不稳定

Domains provide a way to handle multiple different IO operations as a
single group.  If any of the event emitters or callbacks registered to a
domain emit an `error` event, or throw an error, then the domain object
will be notified, rather than losing the context of the error in the
`process.on('uncaughtException')` handler, or causing the program to
exit immediately with an error code.

Domains 提供了一种将多个不同的 IO 操作作为一个单独的组来处理的方式。如果任何一个注册到某个 domain 的事件分发器或回调函数触发了 'error' 事件,或者抛出一个错误,那么 domain 对象将会得到通知,而不会像 `process.on('uncaughtException')` 处理程序那样丢失错误发生的上下文,也不会使程序伴随着错误码立即退出。

## 警告: 不要忽视错误!

<!-- type=misc -->

Domain error handlers are not a substitute for closing down your
process when an error occurs.

域的错误处理程序并不能替代错误发生时关闭进程的做法。

By the very nature of how `throw` works in JavaScript, there is almost
never any way to safely "pick up where you left off", without leaking
references, or creating some other sort of undefined brittle state.

基于 throw 在 JavaScript 中的工作方式,几乎没有任何方式能够在不泄露引用、不造成某种未定义的脆弱状态的前提下,安全地“从你离开的地方重新拾起 (pick up where you left off)”。

The safest way to respond to a thrown error is to shut down the
process.  Of course, in a normal web server, you might have many
connections open, and it is not reasonable to abruptly shut those down
because an error was triggered by someone else.

响应一个被抛出错误的最安全方式就是关闭进程。当然,在一个正常的 Web 服务器中,你可能会有很多打开的连接,因为别人触发的错误而突然关闭所有这些连接是不合理的。

The better approach is send an error response to the request that
triggered the error, while letting the others finish in their normal
time, and stop listening for new requests in that worker.

更好的方法是:向触发错误的请求发送错误响应,同时让其他连接在正常的时间内完成工作,并停止在该工作进程中监听新的请求。

In this way, `domain` usage goes hand-in-hand with the cluster module,
since the master process can fork a new worker when a worker
encounters an error.  For node programs that scale to multiple
machines, the terminating proxy or service registry can take note of
the failure, and react accordingly.

通过这种方式,`domain` 的使用可以与 cluster 模块配合:当某个工作进程遇到错误时,主进程可以 fork 一个新的工作进程。对于部署在多台机器上的 Node 程序,上游的终结代理或服务注册中心可以记录该故障,并做出相应的反应。

For example, this is not a good idea:

举例来说,以下就不是一个好想法:

var d = require('domain').create();
d.on('error', function(er) {
  // 这个错误不会导致进程崩溃,但是情况会更糟糕!
  // 虽然我们阻止了进程突然重启动,但是我们已经发生了资源泄露
  // 这种事情的发生会让我们发疯。
  // 不如调用 process.on('uncaughtException')!
  console.log('error, but oh well', er.message);
});
d.run(function() {
  require('http').createServer(function(req, res) {
    handleRequest(req, res);
  }).listen(PORT);
});

By using the context of a domain, and the resilience of separating our program into multiple worker processes, we can react more appropriately, and handle errors with much greater safety.

通过对域的上下文的使用,以及将程序分隔成多个工作进程所带来的弹性,我们可以做出更恰当的反应,更安全地处理错误。

// 好一些的做法!

var cluster = require('cluster');
var PORT = +process.env.PORT || 1337;

if (cluster.isMaster) {
  // 在实际环境中,你可能会使用不止 2 个工作进程,
  // 而且可能不会把主进程和工作进程放在同一个文件中。
  //
  // 你当然也可以在日志记录上做得更精细一些,
  // 并实现任何你需要的自定义逻辑,来防范 DoS 攻击
  // 等不良行为。
  //
  // 参见 cluster 文档中的各个选项。
  //
  // 最重要的是,主进程做的事情非常少,
  // 这增强了我们抵御意外错误的能力。

  cluster.fork();
  cluster.fork();

  cluster.on('disconnect', function(worker) {
    console.error('disconnect!');
    cluster.fork();
  });

} else {
  // 工作进程
  //
  // 这是我们出错的地方

  var domain = require('domain');

  // 关于使用工作进程处理请求的更多细节
  // (工作原理、注意事项等),参见 cluster 文档。

  var server = require('http').createServer(function(req, res) {
    var d = domain.create();
    d.on('error', function(er) {
      console.error('error', er.stack);

      // 注意!我们正处于危险的状态!
      // 尽量向触发错误的请求返回错误响应,
      // 然后让这个工作进程体面地退出。
      try {
        // 确保进程能在合理的时间内关闭
        var killtimer = setTimeout(function() {
          process.exit(1);
        }, 30000);
        // 但是不要仅仅为了这个定时器而让进程保持运行!
        killtimer.unref();

        // 停止接收新的请求
        server.close();

        // 通知主进程我们已退出,这会触发 cluster 主进程中的
        // 'disconnect' 事件,随后主进程会 fork 一个新的工作进程
        cluster.worker.disconnect();

        // 尝试向触发问题的请求发送错误响应
        res.statusCode = 500;
        res.setHeader('content-type', 'text/plain');
        res.end('Oops, there was a problem!\n');
      } catch (er2) {
        // 唉,此时已无能为力,尽力而为吧
        console.error('Error sending 500!', er2.stack);
      }
    });

    // 因为req和res在这个域存在之前就被创建,
    // 所以我们需要显式添加它们。
    // 详见下面关于显式和隐式绑定的解释。
    d.add(req);
    d.add(res);

    // 现在在域里面运行处理器函数。
    d.run(function() {
      handleRequest(req, res);
    });
  });
  server.listen(PORT);
}

    // 这个部分不是很重要。只是一个简单的路由例子。
    // 你会想把你的超级给力的应用逻辑放在这里。
    function handleRequest(req, res) {
      switch(req.url) {
        case '/error':
          // 我们干了一些异步的东西,然后。。。
          setTimeout(function() {
            // 呃。。。
            flerb.bark();
          });
          break;
        default:
          res.end('ok');
      }
    }

对 Error 对象的补充#

Any time an Error object is routed through a domain, a few extra fields are added to it.

每当一个 Error 对象被导向经过一个域,它都会被添加几个额外的字段。

  • error.domain The domain that first handled the error.
  • error.domainEmitter The event emitter that emitted an 'error' event with the error object.
  • error.domainBound The callback function which was bound to the domain, and passed an error as its first argument.
  • error.domainThrown A boolean indicating whether the error was thrown, emitted, or passed to a bound callback function.

  • error.domain 第一个处理这个错误的域。

  • error.domainEmitter 用这个错误对象触发'error'事件的事件分发器。
  • error.domainBound 回调函数,该回调函数被绑定到域,并且一个错误会作为第一参数传递给这个回调函数。
  • error.domainThrown 一个布尔值表明这个错误是否被抛出,分发或者传递给一个绑定的回调函数。

隐式绑定#

If domains are in use, then all new EventEmitter objects (including Stream objects, requests, responses, etc.) will be implicitly bound to the active domain at the time of their creation.

如果启用了域,那么所有新的 EventEmitter 对象(包括 Stream 对象、请求、应答等等)会在创建时被隐式绑定到当时的有效域。

Additionally, callbacks passed to lowlevel event loop requests (such as to fs.open, or other callback-taking methods) will automatically be bound to the active domain. If they throw, then the domain will catch the error.

而且,传递给低层事件循环请求的回调函数(例如传给 fs.open 或其它接受回调函数的方法)会自动绑定到有效域。如果这些回调函数抛出错误,那么这个域会捕捉到这个错误。

In order to prevent excessive memory usage, Domain objects themselves are not implicitly added as children of the active domain. If they were, then it would be too easy to prevent request and response objects from being properly garbage collected.

为了防止内存的过度使用,Domain对象自己不会作为有效域的子对象被隐式添加到有效域。因为如果这样做的话,会很容易影响到请求和应答对象的正常垃圾回收。

If you want to nest Domain objects as children of a parent Domain, then you must explicitly add them.

如果你在一个父Domain对象里嵌套子Domain对象,那么你需要显式地添加它们。

Implicit binding routes thrown errors and 'error' events to the Domain's error event, but does not register the EventEmitter on the Domain, so domain.dispose() will not shut down the EventEmitter. Implicit binding only takes care of thrown errors and 'error' events.

隐式绑定将被抛出的错误和 'error' 事件导向到 Domain 对象的 error 事件,但不会把该 EventEmitter 注册到域上,所以 domain.dispose() 不会令该 EventEmitter 停止运作。隐式绑定只关心被抛出的错误和 'error' 事件。

显式绑定#

Sometimes, the domain in use is not the one that ought to be used for a specific event emitter. Or, the event emitter could have been created in the context of one domain, but ought to instead be bound to some other domain.

有时,正在使用的域并不是某个事件分发器所应属的域。又或者,事件分发器在一个域内被创建,但是应该被绑定到另一个域。

For example, there could be one domain in use for an HTTP server, but perhaps we would like to have a separate domain to use for each request.

例如,对于一个HTTP服务器,可以有一个正在使用的域,但我们可能希望对每一个请求使用一个不同的域。

That is possible via explicit binding.

这可以通过显式绑定来实现。

For example:

例如:

// 为服务器创建一个顶层的域
var serverDomain = domain.create();

serverDomain.run(function() {
  // 服务器在serverDomain的作用域内被创建
  http.createServer(function(req, res) {
    // req和res同样在serverDomain的作用域内被创建
    // 但是,我们想对于每一个请求使用一个不一样的域。
    // 所以我们首先创建一个域,然后将req和res添加到这个域上。
    var reqd = domain.create();
    reqd.add(req);
    reqd.add(res);
    reqd.on('error', function(er) {
      console.error('Error', er, req.url);
      try {
        res.writeHead(500);
        res.end('Error occurred, sorry.');
      } catch (er) {
        console.error('Error sending 500', er, req.url);
      }
    });
  }).listen(1337);    
});

domain.create()#

  • return: Domain

  • return: Domain

Returns a new Domain object.

返回一个新的Domain对象。

类: Domain#

The Domain class encapsulates the functionality of routing errors and uncaught exceptions to the active Domain object.

Domain 类封装了将错误和未被捕捉的异常导向到有效 Domain 对象的功能。

Domain is a child class of EventEmitter. To handle the errors that it catches, listen to its error event.

Domain是 EventEmitter类的一个子类。监听它的error事件来处理它捕捉到的错误。

domain.run(fn)#

  • fn Function

  • fn Function

Run the supplied function in the context of the domain, implicitly binding all event emitters, timers, and lowlevel requests that are created in that context.

在域的上下文里运行提供的函数,隐式地绑定所有该上下文里创建的事件分发器,计时器和低层请求。

This is the most basic way to use a domain.

这是使用一个域的最基本的方式。

Example:

示例:

var d = domain.create();
d.on('error', function(er) {
  console.error('Caught error!', er);
});
d.run(function() {
  process.nextTick(function() {
    setTimeout(function() { // 模拟几个不同的异步的东西
      fs.open('non-existent file', 'r', function(er, fd) {
        if (er) throw er;
        // 继续。。。
      });
    }, 100);
  });
});

In this example, the d.on('error') handler will be triggered, rather than crashing the program.

在这个例子里, d.on('error') 处理器会被触发,而不是导致程序崩溃。

domain.members#

  • Array

  • Array

An array of timers and event emitters that have been explicitly added to the domain.

一个数组,里面的元素是被显式添加到域里的计时器和事件分发器。

domain.add(emitter)#

  • emitter EventEmitter | Timer emitter or timer to be added to the domain

  • emitter EventEmitter | Timer 被添加到域里的事件分发器或计时器

Explicitly adds an emitter to the domain. If any event handlers called by the emitter throw an error, or if the emitter emits an error event, it will be routed to the domain's error event, just like with implicit binding.

显式地将一个分发器添加到域。如果这个分发器调用的任意一个事件处理器抛出了错误,或是这个分发器分发了一个 error 事件,那么它会像隐式绑定那样被导向到这个域的 error 事件。

This also works with timers that are returned from setInterval and setTimeout. If their callback function throws, it will be caught by the domain 'error' handler.

这对于从 setInterval 和 setTimeout 返回的计时器同样适用。如果这些计时器的回调函数抛出错误,它将会被这个域的 error 处理器捕捉到。

If the Timer or EventEmitter was already bound to a domain, it is removed from that one, and bound to this one instead.

如果这个Timer或EventEmitter对象已经被绑定到另外一个域,那么它将会从那个域被移除,然后绑定到当前的域。

domain.remove(emitter)#

  • emitter EventEmitter | Timer emitter or timer to be removed from the domain

  • emitter EventEmitter | Timer 要从域里被移除的分发器或计时器

The opposite of domain.add(emitter). Removes domain handling from the specified emitter.

domain.add(emitter)函数恰恰相反,这个函数将域处理从指明的分发器里移除。

domain.bind(callback)#

  • callback Function The callback function
  • return: Function The bound function

  • callback Function 回调函数

  • return: Function 被绑定的函数

The returned function will be a wrapper around the supplied callback function. When the returned function is called, any errors that are thrown will be routed to the domain's error event.

返回的函数会是一个对于所提供的回调函数的包装函数。当这个被返回的函数被调用时,所有被抛出的错误都会被导向到这个域的error事件。

例子#

var d = domain.create();

function readSomeFile(filename, cb) {
  fs.readFile(filename, 'utf8', d.bind(function(er, data) {
    // 如果这里抛出错误,也同样会被传递到这个域
    return cb(er, data);
  }));
}

d.on('error', function(er) {
  // 有个地方发生了一个错误。
  // 如果我们现在抛出这个错误,它会让整个程序崩溃
  // 并给出行号和栈信息。
});

domain.intercept(callback)#

  • callback Function The callback function
  • return: Function The intercepted function

  • callback Function 回调函数

  • return: Function 被拦截的函数

This method is almost identical to domain.bind(callback). However, in addition to catching thrown errors, it will also intercept Error objects sent as the first argument to the function.

这个函数与domain.bind(callback)几乎一模一样。但是,除了捕捉被抛出的错误外,它还会拦截作为第一参数被传递到这个函数的Error对象。

In this way, the common if (er) return callback(er); pattern can be replaced with a single error handler in a single place.

通过这种方式,常见的 if (er) return callback(er); 模式,可以被集中在一个地方的单个错误处理器所取代。

例子#

var d = domain.create();

function readSomeFile(filename, cb) {
  fs.readFile(filename, 'utf8', d.intercept(function(data) {
    // 注意:第一个参数永远不会传递给这个回调函数,
    // 因为它被认定为 'Error' 参数,已经被域拦截了。

    // 如果这里抛出错误,也同样会被传递到这个域,
    // 这样错误处理逻辑就可以集中到域的 'error' 事件上,
    // 而不必在程序里到处重复。
    return cb(null, JSON.parse(data));
  }));
}

d.on('error', function(er) {
  // 有个地方发生了一个错误。
  // 如果我们现在抛出这个错误,它会让整个程序崩溃
  // 并给出行号和栈信息。
});

domain.enter()#

The enter method is plumbing used by the run, bind, and intercept methods to set the active domain. It sets domain.active and process.domain to the domain, and implicitly pushes the domain onto the domain stack managed by the domain module (see domain.exit() for details on the domain stack). The call to enter delimits the beginning of a chain of asynchronous calls and I/O operations bound to a domain.

enter 函数是 run、bind 和 intercept 这些方法用来设置有效域的内部管道:它将 domain.active 和 process.domain 设置为该域,并隐式地将该域推入由域模块管理的域栈(关于域栈的细节详见 domain.exit())。对 enter 的调用,标志着绑定到一个域的异步调用链和 I/O 操作的开始。

Calling enter changes only the active domain, and does not alter the domain itself. Enter and exit can be called an arbitrary number of times on a single domain.

调用 enter 仅仅改变有效域,而不改变域本身。在同一个域上,enter 和 exit 可以被调用任意多次。

If the domain on which enter is called has been disposed, enter will return without setting the domain.

如果调用 enter 的域已经被销毁 (disposed),enter 将直接返回而不设置该域。

domain.exit()#

The exit method exits the current domain, popping it off the domain stack. Any time execution is going to switch to the context of a different chain of asynchronous calls, it's important to ensure that the current domain is exited. The call to exit delimits either the end of or an interruption to the chain of asynchronous calls and I/O operations bound to a domain.

exit 函数退出当前的域,将其从域栈顶弹出。每当程序的执行流程将要切换到另一个异步调用链的上下文时,必须确保退出当前的域。对 exit 的调用,标志着绑定到一个域的异步调用链和 I/O 操作的结束或中断。

If there are multiple, nested domains bound to the current execution context, exit will exit any domains nested within this domain.

如果当前执行上下文绑定了多个嵌套的域,exit 将退出嵌套在这个域之内的所有域。

Calling exit changes only the active domain, and does not alter the domain itself. Enter and exit can be called an arbitrary number of times on a single domain.

调用exit只会改变有效域,而不会改变域自身。在一个单一域上,Enterexit可以被调用任意次。

If the domain on which exit is called has been disposed, exit will return without exiting the domain.

如果调用 exit 的域已经被销毁,exit 将直接返回而不退出该域。

domain.dispose()#

稳定度: 0 - 已过时。请通过设置在域上的错误事件处理器,显式地从失败的 IO 操作中恢复。

Once dispose has been called, the domain will no longer be used by callbacks bound into the domain via run, bind, or intercept, and a dispose event is emitted.

一旦dispose被调用,通过runbindintercept绑定到这个域的回调函数将不再使用这个域,并且一个dispose事件会被分发。

Buffer#

稳定度: 3 - 稳定

Pure JavaScript is Unicode friendly but not nice to binary data. When dealing with TCP streams or the file system, it's necessary to handle octet streams. Node has several strategies for manipulating, creating, and consuming octet streams.

纯 JavaScript 对 Unicode 比较友好,但是无法很好地处理二进制数据。在处理 TCP 流或文件系统时,需要处理八位字节流。Node 提供了几种操作、创建和消费八位字节流的策略。

Raw data is stored in instances of the Buffer class. A Buffer is similar to an array of integers but corresponds to a raw memory allocation outside the V8 heap. A Buffer cannot be resized.

原始数据保存在 Buffer 类的实例中。一个 Buffer 实例类似于一个整数数组,但对应着 V8 堆之外的一个原始内存分配区域。一个 Buffer 的大小不可变。

The Buffer class is a global, making it very rare that one would need to ever require('buffer').

Buffer 类是全局的,因此几乎不需要通过 require('buffer') 来使用它。

Converting between Buffers and JavaScript string objects requires an explicit encoding method. Here are the different string encodings.

在 Buffer 和 JavaScript 字符串对象之间转换时,需要指定一个明确的编码方法。下面是几种不同的字符串编码。

  • 'ascii' - for 7 bit ASCII data only. This encoding method is very fast, and will strip the high bit if set.

  • 'ascii' - 仅适用于 7 bit ASCII 数据。这个编码方式非常快,并且会剥离每个字节的最高位(如果设置了的话)。

  • 'utf8' - Multibyte encoded Unicode characters. Many web pages and other document formats use UTF-8.

  • 'utf8' - 多字节编码 Unicode字符。很多网页或者其他文档的编码格式都是使用 UTF-8的。

  • 'utf16le' - 2 or 4 bytes, little endian encoded Unicode characters. Surrogate pairs (U+10000 to U+10FFFF) are supported.

  • 'utf16le' - 2 或者 4 字节, Little Endian (LE) 编码Unicode字符。 代理对 (U+10000 to U+10FFFF) 是支持的.(BE和LE表示大端和小端,Little-Endian就是低位字节排放在内存的低地址端,高位字节排放在内存的高地址端;Big-Endian就是高位字节排放在内存的低地址端,低位字节排放在内存的高地址端;下同)

  • 'ucs2' - Alias of 'utf16le'.

  • 'ucs2' - 'utf16le'的别名.

  • 'base64' - Base64 string encoding.

  • 'base64' - Base64 字符串编码。

  • 'binary' - A way of encoding raw binary data into strings by using only the first 8 bits of each character. This encoding method is deprecated and should be avoided in favor of Buffer objects where possible. This encoding will be removed in future versions of Node.

  • 'binary' - 一个将原始2进制数据编码为字符串的方法,仅使用每个字符的前8bits。 这个编码方式已经被弃用而且应该被避免,尽可能的使用Buffer对象。这个编码方式将会在未来的Node版本中移除。

  • 'hex' - Encode each byte as two hexadecimal characters.

  • 'hex' - 把每个byte编码成2个十六进制字符
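下面这个简短片段(编注:补充示例,非原文档内容)展示了同一段数据在不同编码下的字符串形式:

```javascript
// 将同一个字符串以不同编码在 Buffer 与字符串之间转换
var buf = new Buffer('node', 'utf8');

console.log(buf.toString('hex'));    // 6e6f6465
console.log(buf.toString('base64')); // bm9kZQ==
```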

类: Buffer#

The Buffer class is a global type for dealing with binary data directly. It can be constructed in a variety of ways.

Buffer 类是一个全局变量类型,用来直接处理2进制数据的。 它能够使用多种方式构建。

new Buffer(size)#

  • size Number

  • size Number

Allocates a new buffer of size octets.

分配一个大小为 size 个八位字节的新 buffer。

new Buffer(array)#

  • array Array

  • array Array

Allocates a new buffer using an array of octets.

使用一个八位字节数组 array 分配一个新的 buffer。

new Buffer(str, [encoding])#

  • str String - string to encode.
  • encoding String - encoding to use, Optional.

  • str String类型 - 需要存入buffer的string字符串.

  • encoding String类型 - 使用什么编码方式,参数可选.

Allocates a new buffer containing the given str. encoding defaults to 'utf8'.

分配一个新的 buffer ,其中包含着给定的 str字符串. encoding 编码方式默认是:'utf8'.

类方法: Buffer.isEncoding(encoding)#

  • encoding String The encoding string to test

  • encoding String 用来测试给定的编码字符串

Returns true if the encoding is a valid encoding argument, or false otherwise.

如果给定的编码 encoding 是有效的,返回 true,否则返回 false。
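例如(编注:补充示例,非原文档内容):

```javascript
// 检验若干编码名称是否为有效的 encoding 参数
console.log(Buffer.isEncoding('utf8'));  // true
console.log(Buffer.isEncoding('hex'));   // true
console.log(Buffer.isEncoding('utf-9')); // false
```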

类方法: Buffer.isBuffer(obj)#

  • obj Object
  • Return: Boolean

  • obj Object

  • 返回: Boolean

Tests if obj is a Buffer.

测试这个 obj 是否是一个 Buffer.
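例如(编注:补充示例,非原文档内容):

```javascript
// Buffer 实例返回 true,其他任何值返回 false
var buf = new Buffer('hello');

console.log(Buffer.isBuffer(buf));     // true
console.log(Buffer.isBuffer('hello')); // false
```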

类方法: Buffer.byteLength(string, [encoding])#

  • string String
  • encoding String, Optional, Default: 'utf8'
  • Return: Number

  • string String类型

  • encoding String类型, 可选参数, 默认是: 'utf8'
  • Return: Number类型

Gives the actual byte length of a string. encoding defaults to 'utf8'. This is not the same as String.prototype.length since that returns the number of characters in a string.

将会返回这个字符串的实际 byte 长度。encoding 编码默认是:'utf8'。这和 String.prototype.length 是不一样的,因为后者返回的是字符串中字符的数量。 (译者:当写 HTTP 响应头 Content-Length 的时候,千万记得一定要用 Buffer.byteLength 方法,不要使用 String.prototype.length

Example:

示例:

str = '\u00bd + \u00bc = \u00be';

console.log(str + ": " + str.length + " characters, " +
  Buffer.byteLength(str, 'utf8') + " bytes");

// ½ + ¼ = ¾: 9 characters, 12 bytes

类方法: Buffer.concat(list, [totalLength])#

  • list Array List of Buffer objects to concat
  • totalLength Number Total length of the buffers when concatenated

  • list Array数组类型,Buffer数组,用于被连接。

  • totalLength Number类型 上述Buffer数组的所有Buffer的总大小。(译者:注意这里的totalLength不是数组长度是数组里Buffer实例的大小总和)

Returns a buffer which is the result of concatenating all the buffers in the list together.

返回一个保存着将传入buffer数组中所有buffer对象拼接在一起的buffer对象。(译者:有点拗口,其实就是将数组中所有的buffer实例通过复制拼接在一起)

If the list has no items, or if the totalLength is 0, then it returns a zero-length buffer.

如果传入的数组没有内容,或者 totalLength 参数是0,那将返回一个zero-length的buffer。

If the list has exactly one item, then the first item of the list is returned.

如果数组中只有一项,那么这第一项就会被返回。

If the list has more than one item, then a new Buffer is created.

如果数组中的项多于一个,那么一个新的Buffer实例将被创建。

If totalLength is not provided, it is read from the buffers in the list. However, this adds an additional loop to the function, so it is faster to provide the length explicitly.

如果 totalLength 参数没有提供,虽然会从buffer数组中计算读取,但是会增加一个额外的循环来计算它,所以提供一个明确的 totalLength 参数将会更快。
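例如,下面的片段(编注:补充示例,非原文档内容)演示了显式传入 totalLength 的用法:

```javascript
var a = new Buffer('Hello, ');
var b = new Buffer('world');

// 已知总长度时显式传入,可以省去函数内部的一次额外循环
var joined = Buffer.concat([a, b], a.length + b.length);
console.log(joined.toString()); // Hello, world
```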

buf.length#

  • Number

  • Number类型

The size of the buffer in bytes. Note that this is not necessarily the size of the contents. length refers to the amount of memory allocated for the buffer object. It does not change when the contents of the buffer are changed.

这个buffer的bytes大小。注意这未必是这buffer里面内容的大小。length 的依据是buffer对象所分配的内存数值,它不会随着这个buffer对象内容的改变而改变。

buf = new Buffer(1234);

console.log(buf.length);
buf.write("some string", 0, "ascii");
console.log(buf.length);

// 1234
// 1234

buf.write(string, [offset], [length], [encoding])#

  • string String - data to be written to buffer
  • offset Number, Optional, Default: 0
  • length Number, Optional, Default: buffer.length - offset
  • encoding String, Optional, Default: 'utf8'

  • string String类型 - 将要被写入 buffer 的数据

  • offset Number类型, 可选参数, 默认: 0
  • length Number类型, 可选参数, 默认: buffer.length - offset
  • encoding String类型, 可选参数, 默认: 'utf8'

Writes string to the buffer at offset using the given encoding. offset defaults to 0, encoding defaults to 'utf8'. length is the number of bytes to write. Returns number of octets written. If buffer did not contain enough space to fit the entire string, it will write a partial amount of the string. length defaults to buffer.length - offset. The method will not write partial characters.

根据参数 offset 偏移量和指定的 encoding 编码方式,将参数 string 数据写入 buffer。offset 偏移量默认是 0,encoding 编码方式默认是 'utf8'length 是将要写入的字节数,默认是 buffer.length - offset。返回实际写入的八位字节数。如果 buffer 没有足够的空间放入整个 string,则只会写入字符串的一部分,但该方法不会写入不完整的字符。

buf = new Buffer(256);
len = buf.write('\u00bd + \u00bc = \u00be', 0);
console.log(len + " bytes: " + buf.toString('utf8', 0, len));

buf.toString([encoding], [start], [end])#

  • encoding String, Optional, Default: 'utf8'
  • start Number, Optional, Default: 0
  • end Number, Optional, Default: buffer.length

  • encoding String类型, 可选参数, 默认: 'utf8'

  • start Number类型, 可选参数, 默认: 0
  • end Number类型, 可选参数, 默认: buffer.length

Decodes and returns a string from buffer data encoded with encoding (defaults to 'utf8') beginning at start (defaults to 0) and ending at end (defaults to buffer.length).

根据 encoding参数(默认是 'utf8')返回一个解码的 string 类型。还会根据传入的参数 start (默认是0) 和 end (默认是 buffer.length)作为取值范围。

See buffer.write() example, above.

查看上面buffer.write() 的例子.

buf.toJSON()#

Returns a JSON-representation of the Buffer instance. JSON.stringify implicitly calls this function when stringifying a Buffer instance.

返回该 Buffer 实例的 JSON 表示。当对一个 Buffer 实例进行字符串序列化时,JSON.stringify 会隐式地调用该函数。

Example:

示例:

var buf = new Buffer('test');
var json = JSON.stringify(buf);

console.log(json);
// '[116,101,115,116]'

var copy = new Buffer(JSON.parse(json));

console.log(copy);
// <Buffer 74 65 73 74>

buf[index]#

Get and set the octet at index. The values refer to individual bytes, so the legal range is between 0x00 and 0xFF hex or 0 and 255.

获取或者设置在指定index索引位置的8位字节。这个值是指单个字节,所以这个值必须在合法的范围,16进制的0x000xFF,或者0255

Example: copy an ASCII string into a buffer, one byte at a time:

例子: 拷贝一个 ASCII 编码的 string 字符串到一个 buffer, 一次一个 byte 进行拷贝:

str = "node.js";
buf = new Buffer(str.length);

for (var i = 0; i < str.length ; i++) {
  buf[i] = str.charCodeAt(i);
}

console.log(buf);

// node.js

buf.copy(targetBuffer, [targetStart], [sourceStart], [sourceEnd])#

  • targetBuffer Buffer object - Buffer to copy into
  • targetStart Number, Optional, Default: 0
  • sourceStart Number, Optional, Default: 0
  • sourceEnd Number, Optional, Default: buffer.length

  • targetBuffer Buffer 类型对象 - 将要进行拷贝的Buffer

  • targetStart Number类型, 可选参数, 默认: 0
  • sourceStart Number类型, 可选参数, 默认: 0
  • sourceEnd Number类型, 可选参数, 默认: buffer.length

Does copy between buffers. The source and target regions can be overlapped. targetStart and sourceStart default to 0. sourceEnd defaults to buffer.length.

进行buffer的拷贝,源和目标可以是重叠的。 targetStart 目标开始偏移 和sourceStart源开始偏移 默认都是 0. sourceEnd 源结束位置偏移默认是源的长度 buffer.length.

All values passed that are undefined/NaN or are out of bounds are set equal to their respective defaults.

如果传递的值是undefined/NaN 或者是 out of bounds 超越边界的,就将设置为他们的默认值。(译者:这个默认值下面有的例子有说明)

Example: build two Buffers, then copy buf1 from byte 16 through byte 19 into buf2, starting at the 8th byte in buf2.

例子: 创建2个Buffer,然后把将buf1的16位到19位 拷贝到 buf2中,并且从buf2的第8位开始拷贝。

buf1 = new Buffer(26);
buf2 = new Buffer(26);

for (var i = 0 ; i < 26 ; i++) {
  buf1[i] = i + 97; // 97 是 ASCII 字符 'a'
  buf2[i] = 33; // ASCII 字符 '!'
}

buf1.copy(buf2, 8, 16, 20);
console.log(buf2.toString('ascii', 0, 25));

// !!!!!!!!qrst!!!!!!!!!!!!!

buf.slice([start], [end])#

  • start Number, Optional, Default: 0
  • end Number, Optional, Default: buffer.length

  • start Number类型, 可选参数, 默认: 0

  • end Number类型, 可选参数, 默认: buffer.length

Returns a new buffer which references the same memory as the old, but offset and cropped by the start (defaults to 0) and end (defaults to buffer.length) indexes. Negative indexes start from the end of the buffer.

返回一个新的buffer,这个buffer将会和老的buffer引用相同的内存地址,只是根据 start (默认是 0) 和end (默认是buffer.length) 偏移和裁剪了索引。 负的索引是从buffer尾部开始计算的。

Modifying the new buffer slice will modify memory in the original buffer!

修改这个新的buffer实例slice切片,也会改变原来的buffer

Example: build a Buffer with the ASCII alphabet, take a slice, then modify one byte from the original Buffer.

例子: 创建一个ASCII 字母的 Buffer,对它slice切片,然后修改源Buffer上的一个byte。

var buf1 = new Buffer(26);

for (var i = 0 ; i < 26 ; i++) {
  buf1[i] = i + 97; // 97 是 ASCII 字符 'a'
}

var buf2 = buf1.slice(0, 3);
console.log(buf2.toString('ascii', 0, buf2.length));
buf1[0] = 33;
console.log(buf2.toString('ascii', 0, buf2.length));

// abc
// !bc

buf.readUInt8(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads an unsigned 8 bit integer from the buffer at the specified offset.

从这个buffer对象里,根据指定的偏移量,读取一个 unsigned 8 bit integer整形。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Example:

示例:

var buf = new Buffer(4);

buf[0] = 0x3;
buf[1] = 0x4;
buf[2] = 0x23;
buf[3] = 0x42;

for (ii = 0; ii < buf.length; ii++) {
  console.log(buf.readUInt8(ii));
}

// 0x3
// 0x4
// 0x23
// 0x42

buf.readUInt16LE(offset, [noAssert])#

buf.readUInt16BE(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads an unsigned 16 bit integer from the buffer at the specified offset with specified endian format.

从这个buffer对象里,根据指定的偏移量,使用特殊的 endian字节序格式读取一个 unsigned 16 bit integer。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Example:

示例:

var buf = new Buffer(4);

buf[0] = 0x3;
buf[1] = 0x4;
buf[2] = 0x23;
buf[3] = 0x42;

console.log(buf.readUInt16BE(0));
console.log(buf.readUInt16LE(0));
console.log(buf.readUInt16BE(1));
console.log(buf.readUInt16LE(1));
console.log(buf.readUInt16BE(2));
console.log(buf.readUInt16LE(2));

// 0x0304
// 0x0403
// 0x0423
// 0x2304
// 0x2342
// 0x4223

buf.readUInt32LE(offset, [noAssert])#

buf.readUInt32BE(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads an unsigned 32 bit integer from the buffer at the specified offset with specified endian format.

从这个buffer对象里,根据指定的偏移量,使用指定的 endian字节序格式读取一个 unsigned 32 bit integer。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Example:

示例:

var buf = new Buffer(4);

buf[0] = 0x3;
buf[1] = 0x4;
buf[2] = 0x23;
buf[3] = 0x42;

console.log(buf.readUInt32BE(0));
console.log(buf.readUInt32LE(0));

// 0x03042342
// 0x42230403

buf.readInt8(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads a signed 8 bit integer from the buffer at the specified offset.

从这个buffer对象里,根据指定的偏移量,读取一个 signed 8 bit integer。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Works as buffer.readUInt8, except buffer contents are treated as two's complement signed values.

buffer.readUInt8一样的返回,除非buffer中包含了有作为2的补码的有符号值。

buf.readInt16LE(offset, [noAssert])#

buf.readInt16BE(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads a signed 16 bit integer from the buffer at the specified offset with specified endian format.

从这个buffer对象里,根据指定的偏移量,使用特殊的 endian字节序格式读取一个 signed 16 bit integer。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Works as buffer.readUInt16*, except buffer contents are treated as two's complement signed values.

buffer.readUInt16* 的工作方式相同,区别在于 buffer 的内容会被当作二进制补码表示的有符号值来解读。

buf.readInt32LE(offset, [noAssert])#

buf.readInt32BE(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads a signed 32 bit integer from the buffer at the specified offset with specified endian format.

从这个buffer对象里,根据指定的偏移量,使用指定的 endian字节序格式读取一个 signed 32 bit integer。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Works as buffer.readUInt32*, except buffer contents are treated as two's complement signed values.

buffer.readUInt32* 的工作方式相同,区别在于 buffer 的内容会被当作二进制补码表示的有符号值来解读。

buf.readFloatLE(offset, [noAssert])#

buf.readFloatBE(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads a 32 bit float from the buffer at the specified offset with specified endian format.

从这个buffer对象里,根据指定的偏移量,使用指定的 endian字节序格式读取一个 32 bit float。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Example:

示例:

var buf = new Buffer(4);

buf[0] = 0x00;
buf[1] = 0x00;
buf[2] = 0x80;
buf[3] = 0x3f;

console.log(buf.readFloatLE(0));

// 0x01

buf.readDoubleLE(offset, [noAssert])#

buf.readDoubleBE(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads a 64 bit double from the buffer at the specified offset with specified endian format.

从这个buffer对象里,根据指定的偏移量,使用指定的 endian字节序格式读取一个 64 bit double。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Example:

示例:

var buf = new Buffer(8);

buf[0] = 0x55;
buf[1] = 0x55;
buf[2] = 0x55;
buf[3] = 0x55;
buf[4] = 0x55;
buf[5] = 0x55;
buf[6] = 0xd5;
buf[7] = 0x3f;

console.log(buf.readDoubleLE(0));

// 0.3333333333333333

buf.writeUInt8(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset. Note, value must be a valid unsigned 8 bit integer.

根据指定的offset偏移量将value写入buffer。注意:value 必须是一个合法的unsigned 8 bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略验证 valueoffset 参数。这意味着 value 可能过大,或者 offset 可能超出 buffer 的末尾,导致 value 被悄悄丢弃。除非您非常有把握,否则不应使用这个参数。默认是 false

Example:

示例:

var buf = new Buffer(4);
buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);

console.log(buf);

// <Buffer 03 04 23 42>

buf.writeUInt16LE(value, offset, [noAssert])#

buf.writeUInt16BE(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a valid unsigned 16 bit integer.

根据指定的offset偏移量和指定的 endian字节序格式将value写入buffer。注意:value 必须是一个合法的unsigned 16 bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略验证 valueoffset 参数。这意味着 value 可能过大,或者 offset 可能超出 buffer 的末尾,导致 value 被悄悄丢弃。除非您非常有把握,否则不应使用这个参数。默认是 false

Example:

示例:

var buf = new Buffer(4);
buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);

console.log(buf);

buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);

console.log(buf);

// <Buffer de ad be ef>
// <Buffer ad de ef be>

buf.writeUInt32LE(value, offset, [noAssert])#

buf.writeUInt32BE(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a valid unsigned 32 bit integer.

根据指定的offset偏移量和指定的 endian字节序格式将value写入buffer。注意:value 必须是一个合法的unsigned 32 bit integer。

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略验证 valueoffset 参数。这意味着 value 可能过大,或者 offset 可能超出 buffer 的末尾,导致 value 被悄悄丢弃。除非您非常有把握,否则不应使用这个参数。默认是 false

Example:

示例:

var buf = new Buffer(4);
buf.writeUInt32BE(0xfeedface, 0);

console.log(buf);

buf.writeUInt32LE(0xfeedface, 0);

console.log(buf);

// <Buffer fe ed fa ce>
// <Buffer ce fa ed fe>

buf.writeInt8(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset. Note, value must be a valid signed 8 bit integer.

根据指定的offset偏移量将value写入buffer。注意:value 必须是一个合法的 signed 8 bit integer。

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略验证 valueoffset 参数。这意味着 value 可能过大,或者 offset 可能超出 buffer 的末尾,导致 value 被悄悄丢弃。除非您非常有把握,否则不应使用这个参数。默认是 false

Works as buffer.writeUInt8, except value is written out as a two's complement signed integer into buffer.

buffer.writeUInt8 一样工作,除非是把有2的补码的 signed integer 有符号整形写入buffer

buf.writeInt16LE(value, offset, [noAssert])#

buf.writeInt16BE(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a valid signed 16 bit integer.

根据指定的offset偏移量和指定的 endian字节序格式将value写入buffer。注意:value 必须是一个合法的 signed 16 bit integer。

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略验证 valueoffset 参数。这意味着 value 可能过大,或者 offset 可能超出 buffer 的末尾,导致 value 被悄悄丢弃。除非您非常有把握,否则不应使用这个参数。默认是 false

Works as buffer.writeUInt16*, except value is written out as a two's complement signed integer into buffer.

buffer.writeUInt16* 一样工作,除非是把有2的补码的 signed integer 有符号整形写入buffer

buf.writeInt32LE(value, offset, [noAssert])#

buf.writeInt32BE(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a valid signed 32 bit integer.

根据指定的offset偏移量和指定的 endian字节序格式将value写入buffer。注意:value 必须是一个合法的 signed 32 bit integer。

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略验证 valueoffset 参数。这意味着 value 可能过大,或者 offset 可能超出 buffer 的末尾,导致 value 被悄悄丢弃。除非您非常有把握,否则不应使用这个参数。默认是 false

Works as buffer.writeUInt32*, except value is written out as a two's complement signed integer into buffer.

buffer.writeUInt32* 一样工作,除非是把有2的补码的 signed integer 有符号整形写入buffer

buf.writeFloatLE(value, offset, [noAssert])#

buf.writeFloatBE(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset with specified endian format. Note, behavior is unspecified if value is not a 32 bit float.

根据指定的offset偏移量和指定的 endian字节序格式将value写入buffer。注意:当value 不是一个 32 bit float 类型的值时,结果将是不确定的。

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略验证 valueoffset 参数。这意味着 value 可能过大,或者 offset 可能超出 buffer 的末尾,导致 value 被悄悄丢弃。除非您非常有把握,否则不应使用这个参数。默认是 false

Example:

示例:

var buf = new Buffer(4);
buf.writeFloatBE(0xcafebabe, 0);

console.log(buf);

buf.writeFloatLE(0xcafebabe, 0);

console.log(buf);

// <Buffer 4f 4a fe bb>
// <Buffer bb fe 4a 4f>

buf.writeDoubleLE(value, offset, [noAssert])#

buf.writeDoubleBE(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a valid 64 bit double.

根据指定的offset偏移量和指定的 endian字节序格式将value写入buffer。注意:value 必须是一个有效的 64 bit double 类型的值。

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略验证 valueoffset 参数。这意味着 value 可能过大,或者 offset 可能超出 buffer 的末尾,导致 value 被悄悄丢弃。除非您非常有把握,否则不应使用这个参数。默认是 false

Example:

示例:

var buf = new Buffer(8);
buf.writeDoubleBE(0xdeadbeefcafebabe, 0);

console.log(buf);

buf.writeDoubleLE(0xdeadbeefcafebabe, 0);

console.log(buf);

// <Buffer 43 eb d5 b7 dd f9 5f d7>
// <Buffer d7 5f f9 dd b7 d5 eb 43>

buf.fill(value, [offset], [end])#

  • value
  • offset Number, Optional
  • end Number, Optional

  • value

  • offset Number类型, 可选参数
  • end Number类型, 可选参数

Fills the buffer with the specified value. If the offset (defaults to 0) and end (defaults to buffer.length) are not given it will fill the entire buffer.

使用指定的value来填充这个buffer。如果 offset (默认是 0) 并且 end (默认是 buffer.length) 没有明确给出,就会填充整个buffer。 (译者:buf.fill调用的是C语言的memset函数非常高效)

var b = new Buffer(50);
b.fill("h");

buffer.INSPECT_MAX_BYTES#

  • Number, Default: 50

  • Number类型, 默认: 50

How many bytes will be returned when buffer.inspect() is called. This can be overridden by user modules.

设置当调用buffer.inspect()方法后,多少bytes将会返回。这个值可以被用户模块重写。 (译者:这个值主要用在当我们打印console.log(buf)时,设置返回多少长度内容)

Note that this is a property on the buffer module returned by require('buffer'), not on the Buffer global, or a buffer instance.

注意这是 require('buffer') 返回的 buffer 模块上的一个属性,既不在全局变量 Buffer 上,也不在 buffer 的实例里。

类: SlowBuffer#

Returns an un-pooled Buffer.

返回一个不被池管理的 Buffer

In order to avoid the garbage collection overhead of creating many individually allocated Buffers, by default allocations under 4KB are sliced from a single larger allocated object. This approach improves both performance and memory usage since v8 does not need to track and cleanup as many Persistent objects.

为了避免创建大量独立分配的 Buffer 带来的垃圾回收开销,默认情况下小于 4KB 的空间都是切割自一个较大的独立对象。这种策略既提高了性能也改善了内存使用,因为 V8 不需要跟踪和清理很多 Persistent 对象。

In the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time it may be appropriate to create an un-pooled Buffer instance using SlowBuffer and copy out the relevant bits.

当开发者需要将池中一小块数据保留不确定的一段时间,较为妥当的办法是用 SlowBuffer 创建一个不被池管理的 Buffer 实例并将相应数据拷贝出来。

socket.on('readable', function() {
  var data = socket.read();
  // 为需要保留的数据分配内存
  var sb = new SlowBuffer(10);
  // 将数据拷贝到新的空间中
  data.copy(sb, 0, 0, 10);
  store.push(sb);
});

Though this should used sparingly and only be a last resort after a developer has actively observed undue memory retention in their applications.

尽管如此,这种方法应当谨慎使用,并且仅在开发者确实观察到应用中存在不合理的内存滞留之后,作为最后的手段。

流(Stream)#

稳定度: 2 - 不稳定

A stream is an abstract interface implemented by various objects in Node. For example a request to an HTTP server is a stream, as is stdout. Streams are readable, writable, or both. All streams are instances of EventEmitter

流是一个抽象接口,被 Node 中的很多对象所实现。比如对一个 HTTP 服务器的请求是一个流,stdout 也是一个流。流是可读、可写或兼具两者的。所有流都是 EventEmitter 的实例。

You can load the Stream base classes by doing require('stream'). There are base classes provided for Readable streams, Writable streams, Duplex streams, and Transform streams.

您可以通过 require('stream') 加载 Stream 基类,其中包括了 Readable 流、Writable 流、Duplex 流和 Transform 流的基类。

This document is split up into 3 sections. The first explains the parts of the API that you need to be aware of to use streams in your programs. If you never implement a streaming API yourself, you can stop there.

本文档分为三个章节。第一章节解释了您在您的程序中使用流时需要了解的那部分 API,如果您不打算自己实现一个流式 API,您可以只阅读这一章节。

The second section explains the parts of the API that you need to use if you implement your own custom streams yourself. The API is designed to make this easy for you to do.

第二章节解释了当您自己实现一个流时需要用到的那部分 API,这些 API 是为了方便您这么做而设计的。

The third section goes into more depth about how streams work, including some of the internal mechanisms and functions that you should probably not modify unless you definitely know what you are doing.

第三章节深入讲解了流的工作方式,包括一些内部机制和函数,除非您明确知道您在做什么,否则尽量不要改动它们。

面向流消费者的 API#

Streams can be either Readable, Writable, or both (Duplex).

流可以是可读(Readable)或可写(Writable),或者兼具两者(Duplex,双工)的。

All streams are EventEmitters, but they also have other custom methods and properties depending on whether they are Readable, Writable, or Duplex.

所有流都是 EventEmitter,但它们也具有其它自定义方法和属性,取决于它们是 Readable、Writable 或 Duplex。

If a stream is both Readable and Writable, then it implements all of the methods and events below. So, a Duplex or Transform stream is fully described by this API, though their implementation may be somewhat different.

如果一个流既可读(Readable)也可写(Writable),则它实现了下文所述的所有方法和事件。因此,这些 API 同时也涵盖了 DuplexTransform 流,即便它们的实现可能有点不同。

It is not necessary to implement Stream interfaces in order to consume streams in your programs. If you are implementing streaming interfaces in your own program, please also refer to API for Stream Implementors below.

为了消费流而在您的程序中自己实现 Stream 接口是没有必要的。如果您确实正在您自己的程序中实现流式接口,请同时参考下文面向流实现者的 API

Almost all Node programs, no matter how simple, use Streams in some way. Here is an example of using Streams in a Node program:

几乎所有 Node 程序,无论多简单,都在某种途径用到了流。这里有一个使用流的 Node 程序的例子:

var http = require('http');

var server = http.createServer(function (req, res) {
  // req is an http.IncomingMessage, which is a Readable Stream
  // res is an http.ServerResponse, which is a Writable Stream

var server = http.createServer(function (req, res) {
  // req 为 http.IncomingMessage,是一个可读流(Readable Stream)
  // res 为 http.ServerResponse,是一个可写流(Writable Stream)

  var body = '';
  // we want to get the data as utf8 strings
  // If you don't set an encoding, then you'll get Buffer objects
  req.setEncoding('utf8');

  var body = '';
  // 我们打算以 UTF-8 字符串的形式获取数据
  // 如果您不设置编码,您将得到一个 Buffer 对象
  req.setEncoding('utf8');

  // Readable streams emit 'data' events once a listener is added
  req.on('data', function (chunk) {
    body += chunk;
  })

  // 一旦监听器被添加,可读流会触发 'data' 事件
  req.on('data', function (chunk) {
    body += chunk;
  })

  // the end event tells you that you have entire body
  req.on('end', function () {
    try {
      var data = JSON.parse(body);
    } catch (er) {
      // uh oh!  bad json!
      res.statusCode = 400;
      return res.end('error: ' + er.message);
    }

  // 'end' 事件表明您已经得到了完整的 body
  req.on('end', function () {
    try {
      var data = JSON.parse(body);
    } catch (er) {
      // uh oh!  bad json!
      res.statusCode = 400;
      return res.end('错误: ' + er.message);
    }

    // write back something interesting to the user:
    res.write(typeof data);
    res.end();
  })
})

    // 向用户回写一些有趣的信息
    res.write(typeof data);
    res.end();
  })
})

server.listen(1337);

server.listen(1337);

// $ curl localhost:1337 -d '{}'
// object
// $ curl localhost:1337 -d '"foo"'
// string
// $ curl localhost:1337 -d 'not json'
// 错误: Unexpected token o

类: stream.Readable#

The Readable stream interface is the abstraction for a source of data that you are reading from. In other words, data comes out of a Readable stream.

Readable(可读)流接口是对您正在读取的数据的来源的抽象。换言之,数据出自一个 Readable 流。

A Readable stream will not start emitting data until you indicate that you are ready to receive it.

在您表明您已准备好接收数据之前,Readable 流并不会开始发出数据。

Readable streams have two "modes": a flowing mode and a paused mode. When in flowing mode, data is read from the underlying system and provided to your program as fast as possible. In paused mode, you must explicitly call stream.read() to get chunks of data out. Streams start out in paused mode.

Readable 流有两种“模式”:流动模式暂停模式。当处于流动模式时,数据由底层系统读出,并尽可能快地提供给您的程序;当处于暂停模式时,您必须明确地调用 stream.read() 来取出若干数据块。流默认处于暂停模式。

Note: If no data event handlers are attached, and there are no pipe() destinations, and the stream is switched into flowing mode, then data will be lost.

注意:如果没有绑定 data 事件处理器,也没有 pipe() 目标,而流又被切换到流动模式,那么数据将会丢失。

You can switch to flowing mode by doing any of the following:

您可以通过下面几种做法切换到流动模式:

  • Adding a 'data' event handler to listen for data.
  • Calling the resume() method to explicitly open the flow.
  • Calling the pipe() method to send the data to a Writable.

  • 添加一个 'data' 事件处理器来监听数据。

  • 调用 resume() 方法来明确开启数据流。

  • 调用 pipe() 方法将数据发送到一个 Writable(可写流)。

You can switch back to paused mode by doing either of the following:

您可以通过下面其中一种做法切换回暂停模式:

  • If there are no pipe destinations, by calling the pause() method.
  • If there are pipe destinations, by removing any 'data' event handlers, and removing all pipe destinations by calling the unpipe() method.

  • 如果没有导流目标,调用 pause() 方法。

  • 如果有导流目标,移除所有 ['data' 事件][] 处理器、调用 unpipe() 方法移除所有导流目标。

Note that, for backwards compatibility reasons, removing 'data' event handlers will not automatically pause the stream. Also, if there are piped destinations, then calling pause() will not guarantee that the stream will remain paused once those destinations drain and ask for more data.

请注意,为了向后兼容考虑,移除 'data' 事件监听器并不会自动暂停流。同样的,当有导流目标时,调用 pause() 并不能保证流在那些目标排空并请求更多数据时维持暂停状态。

Examples of readable streams include:

一些可读流的例子:

  • http responses, on the client
  • http requests, on the server
  • fs read streams
  • zlib streams
  • crypto streams
  • tcp sockets
  • child process stdout and stderr
  • process.stdin

  • 客户端的 http 响应

  • 服务器端的 http 请求

  • fs 读取流

  • zlib 流

  • crypto 流

  • TCP 套接字

  • 子进程的 stdout 和 stderr

  • process.stdin

事件: 'readable'#

When a chunk of data can be read from the stream, it will emit a 'readable' event.

当一个数据块可以从流中被读出时,它会触发一个 'readable' 事件。

In some cases, listening for a 'readable' event will cause some data to be read into the internal buffer from the underlying system, if it hadn't already.

在某些情况下,如果数据尚未被读入,那么监听一个 'readable' 事件会使得一些数据从底层系统被读入内部缓冲区。

var readable = getReadableStreamSomehow();
readable.on('readable', function() {
  // 现在有数据可以读了
})

Once the internal buffer is drained, a readable event will fire again when more data is available.

当内部缓冲区被排空后,一旦有更多数据可用,一个 readable 事件会被再次触发。

事件: 'data'#

  • chunk Buffer | String The chunk of data.

  • chunk Buffer | String 数据块。

Attaching a data event listener to a stream that has not been explicitly paused will switch the stream into flowing mode. Data will then be passed as soon as it is available.

绑定一个 data 事件监听器到一个未被明确暂停的流,会将流切换到流动模式;此后数据一可用就会被传递出去。

If you just want to get all the data out of the stream as fast as possible, this is the best way to do so.

如果您想从流尽快取出所有数据,这是最理想的方式。

var readable = getReadableStreamSomehow();
readable.on('data', function(chunk) {
  console.log('得到了 %d 字节的数据', chunk.length);
})

事件: 'end'#

This event fires when no more data will be provided.

该事件会在没有更多数据能够提供时被触发。

Note that the end event will not fire unless the data is completely consumed. This can be done by switching into flowing mode, or by calling read() repeatedly until you get to the end.

请注意,end 事件在数据被完全消费之前不会被触发。这可通过切换到流动模式,或者在到达末端前不断调用 read() 来实现。

var readable = getReadableStreamSomehow();
readable.on('data', function(chunk) {
  console.log('得到了 %d 字节的数据', chunk.length);
})
readable.on('end', function() {
  console.log('读取完毕。');
});

事件: 'close'#

Emitted when the underlying resource (for example, the backing file descriptor) has been closed. Not all streams will emit this.

当底层数据源(比如,源头的文件描述符)被关闭时触发。并不是所有流都会触发这个事件。

事件: 'error'#

Emitted if there was an error receiving data.

当接收数据的过程中发生错误时触发。

readable.read([size])#

  • size Number Optional argument to specify how much data to read.
  • Return String | Buffer | null

  • size Number 可选参数,指定要读取多少数据。

  • 返回 String | Buffer | null

The read() method pulls some data out of the internal buffer and returns it. If there is no data available, then it will return null.

read() 方法从内部缓冲区中拉取并返回若干数据。当没有更多数据可用时,它会返回 null

If you pass in a size argument, then it will return that many bytes. If size bytes are not available, then it will return null.

若您传入了 size 参数,那么它会返回相应字节数的数据;当 size 字节不可用时,它则返回 null

If you do not specify a size argument, then it will return all the data in the internal buffer.

若您没有指定 size 参数,那么它会返回内部缓冲区中的所有数据。

This method should only be called in paused mode. In flowing mode, this method is called automatically until the internal buffer is drained.

该方法仅应在暂停模式时被调用。在流动模式中,该方法会被自动调用直到内部缓冲区排空。

var readable = getReadableStreamSomehow();
readable.on('readable', function() {
  var chunk;
  while (null !== (chunk = readable.read())) {
    console.log('得到了 %d 字节的数据', chunk.length);
  }
});

If this method returns a data chunk, then it will also trigger the emission of a 'data' event.

当该方法返回了一个数据块时,它同时也会触发 'data' 事件

readable.setEncoding(encoding)#

  • encoding String The encoding to use.
  • Return: this

  • encoding String 要使用的编码。

  • 返回: this

Call this function to cause the stream to return strings of the specified encoding instead of Buffer objects. For example, if you do readable.setEncoding('utf8'), then the output data will be interpreted as UTF-8 data, and returned as strings. If you do readable.setEncoding('hex'), then the data will be encoded in hexadecimal string format.

调用此函数会使得流返回指定编码的字符串而不是 Buffer 对象。比如,当您调用 readable.setEncoding('utf8') 时,输出数据会被作为 UTF-8 数据解析,并以字符串返回;当您调用 readable.setEncoding('hex') 时,数据会被编码成十六进制字符串格式。

This properly handles multi-byte characters that would otherwise be potentially mangled if you simply pulled the Buffers directly and called buf.toString(encoding) on them. If you want to read the data as strings, always use this method.

该方法能正确处理多字节字符。假如您不这么做,仅仅直接取出 Buffer 并对它们调用 buf.toString(encoding),很可能会导致字节错位。因此如果您打算以字符串读取数据,请总是使用这个方法。

var readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', function(chunk) {
  assert.equal(typeof chunk, 'string');
  console.log('得到了 %d 个字符的字符串数据', chunk.length);
})

readable.resume()#

  • Return: this

  • 返回: this

This method will cause the readable stream to resume emitting data events.

该方法让可读流继续触发 data 事件。

This method will switch the stream into flowing mode. If you do not want to consume the data from a stream, but you do want to get to its end event, you can call readable.resume() to open the flow of data.

该方法会将流切换到流动模式。如果您不想从流中消费数据,但想要得到它的 end 事件,您可以调用 readable.resume() 来开启数据流。

var readable = getReadableStreamSomehow();
readable.resume();
readable.on('end', function(chunk) {
  console.log('到达末端,但并未读取任何东西');
})

readable.pause()#

  • Return: this

  • 返回: this

This method will cause a stream in flowing mode to stop emitting data events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

该方法会使一个处于流动模式的流停止触发 data 事件,切换到非流动模式,并让后续可用数据留在内部缓冲区中。

var readable = getReadableStreamSomehow();
readable.on('data', function(chunk) {
  console.log('取得 %d 字节数据', chunk.length);
  readable.pause();
  console.log('接下来 1 秒内不会有数据');
  setTimeout(function() {
    console.log('现在数据会再次开始流动');
    readable.resume();
  }, 1000);
})

readable.pipe(destination, [options])#

  • destination Writable Stream The destination for writing data
  • options Object Pipe options

    • end Boolean End the writer when the reader ends. Default = true
  • destination Writable Stream 写入数据的目标

  • options Object 导流选项
    • end Boolean 在读取者结束时结束写入者。缺省为 true

This method pulls all the data out of a readable stream, and writes it to the supplied destination, automatically managing the flow so that the destination is not overwhelmed by a fast readable stream.

该方法从可读流中拉取所有数据,并写入到所提供的目标。该方法能自动控制流量以避免目标被快速读取的可读流所淹没。

Multiple destinations can be piped to safely.

可以导流到多个目标。

var readable = getReadableStreamSomehow();
var writable = fs.createWriteStream('file.txt');
// 所有来自 readable 的数据会被写入到 'file.txt'
readable.pipe(writable);

This function returns the destination stream, so you can set up pipe chains like so:

该函数返回目标流,因此您可以建立导流链:

var r = fs.createReadStream('file.txt');
var z = zlib.createGzip();
var w = fs.createWriteStream('file.txt.gz');
r.pipe(z).pipe(w);

For example, emulating the Unix cat command:

例如,模拟 Unix 的 cat 命令:

process.stdin.pipe(process.stdout);

By default end() is called on the destination when the source stream emits end, so that destination is no longer writable. Pass { end: false } as options to keep the destination stream open.

缺省情况下当来源流触发 end 时目标的 end() 会被调用,所以此时 destination 不再可写。传入 { end: false } 作为 options 可以让目标流保持开启状态。

This keeps writer open so that "Goodbye" can be written at the end.

这将让 writer 保持开启,因此最后可以写入 "Goodbye"。

reader.pipe(writer, { end: false });
reader.on('end', function() {
  writer.end('Goodbye\n');
});

Note that process.stderr and process.stdout are never closed until the process exits, regardless of the specified options.

请注意 process.stderrprocess.stdout 在进程结束前都不会被关闭,无论是否指定选项。

readable.unpipe([destination])#

  • destination Writable Stream Optional specific stream to unpipe

  • destination Writable Stream 可选,指定解除导流的流

This method will remove the hooks set up for a previous pipe() call.

该方法会移除之前调用 pipe() 所设定的钩子。

If the destination is not specified, then all pipes are removed.

如果不指定目标,所有导流都会被移除。

If the destination is specified, but no pipe is set up for it, then this is a no-op.

如果指定了目标,但并没有与之建立导流,则什么事都不会发生。

var readable = getReadableStreamSomehow();
var writable = fs.createWriteStream('file.txt');
// 来自 readable 的所有数据都会被写入 'file.txt',
// 但仅发生在第 1 秒
readable.pipe(writable);
setTimeout(function() {
  console.log('停止写入到 file.txt');
  readable.unpipe(writable);
  console.log('自行关闭文件流');
  writable.end();
}, 1000);

readable.unshift(chunk)#

  • chunk Buffer | String Chunk of data to unshift onto the read queue

  • chunk Buffer | String 要插回读取队列开头的数据块

This is useful in certain cases where a stream is being consumed by a parser, which needs to "un-consume" some data that it has optimistically pulled out of the source, so that the stream can be passed on to some other party.

该方法在某些特定场景中很有用,比如:一个流正在被一个解析器消费,解析器可能需要将某些乐观地从来源取出的数据“逆消费”回去,以便这个流能被传递给其它消费者。

If you find that you must often call stream.unshift(chunk) in your programs, consider implementing a Transform stream instead. (See API for Stream Implementors, below.)

如果您发现您需要在您的程序中频繁调用 stream.unshift(chunk),请考虑实现一个 Transform 流。(详见下文面向流实现者的 API。)

// 取出以 \n\n 分割的头部并将多余部分 unshift() 回去
// callback 以 (error, header, stream) 形式调用
var StringDecoder = require('string_decoder').StringDecoder;
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  var decoder = new StringDecoder('utf8');
  var header = '';
  function onReadable() {
    var chunk;
    while (null !== (chunk = stream.read())) {
      var str = decoder.write(chunk);
      if (str.match(/\n\n/)) {
        // 找到头部边界
        var split = str.split(/\n\n/);
        header += split.shift();
        var remaining = split.join('\n\n');
        var buf = new Buffer(remaining, 'utf8');
        if (buf.length)
          stream.unshift(buf);
        stream.removeListener('error', callback);
        stream.removeListener('readable', onReadable);
        // 现在可以从流中读取消息的主体了
        callback(null, header, stream);
      } else {
        // 仍在读取头部
        header += str;
      }
    }
  }
}

readable.wrap(stream)#

  • stream Stream An "old style" readable stream

  • stream Stream 一个“旧式”可读流

Versions of Node prior to v0.10 had streams that did not implement the entire Streams API as it is today. (See "Compatibility" below for more information.)

Node v0.10 版本之前的流并未实现现今所有流 API。(更多信息详见下文“兼容性”章节。)

If you are using an older Node library that emits 'data' events and has a pause() method that is advisory only, then you can use the wrap() method to create a Readable stream that uses the old stream as its data source.

如果您正在使用一个旧版本的 Node 库,它会触发 'data' 事件,并且有一个仅起提示作用的 pause() 方法,那么您可以使用 wrap() 方法来创建一个以这个旧式流作为数据源的 Readable 流。

You will very rarely ever need to call this function, but it exists as a convenience for interacting with old Node programs and libraries.

您可能很少需要用到这个函数,但它会作为与旧 Node 程序和库交互的简便方法存在。

For example:

例如:

var Readable = require('stream').Readable;
var myReader = new Readable().wrap(oreader); // oreader 为一个旧式流

myReader.on('readable', function() {
  myReader.read(); // etc.
});

类: stream.Writable#

The Writable stream interface is an abstraction for a destination that you are writing data to.

Writable(可写)流接口是对数据写入目标的一种抽象。

Examples of writable streams include:

一些可写流的例子:

writable.write(chunk, [encoding], [callback])#

  • chunk String | Buffer The data to write
  • encoding String The encoding, if chunk is a String
  • callback Function Callback for when this chunk of data is flushed
  • Returns: Boolean True if the data was handled completely.
  • chunk {String | Buffer} 要写入的数据
  • encoding {String} 编码,假如 chunk 是一个字符串
  • callback {Function} 数据块写入后的回调
  • 返回: {Boolean} 如果数据已被全部处理则 true

This method writes some data to the underlying system, and calls the supplied callback once the data has been fully handled.

该方法向底层系统写入数据,并在数据被处理完毕后调用所给的回调。

The return value indicates if you should continue writing right now. If the data had to be buffered internally, then it will return false. Otherwise, it will return true.

返回值表明您是否应该立即继续写入。如果数据需要滞留在内部,则它会返回 false;否则,返回 true

This return value is strictly advisory. You MAY continue to write, even if it returns false. However, writes will be buffered in memory, so it is best not to do this excessively. Instead, wait for the drain event before writing more data.

返回值仅起提示作用。即便它返回 false,您【可以】继续写入;但写入的数据会被缓冲在内存中,所以最好不要过分地这么做。更好的做法是等待 drain 事件发生后再继续写入更多数据。

事件: 'drain'#

If a writable.write(chunk) call returns false, then the drain event will indicate when it is appropriate to begin writing more data to the stream.

如果一个 writable.write(chunk) 调用返回 false,那么 drain 事件则表明可以继续向流写入更多数据。

// 向所给可写流写入 1000000 次数据。
// 注意后端压力。
function writeOneMillionTimes(writer, data, encoding, callback) {
  var i = 1000000;
  write();
  function write() {
    var ok = true;
    do {
      i -= 1;
      if (i === 0) {
        // 最后一次!
        writer.write(data, encoding, callback);
      } else {
        // 检查我们应该继续还是等待
        // 不要传递回调,因为我们还没完成。
        ok = writer.write(data, encoding);
      }
    } while (i > 0 && ok);
    if (i > 0) {
      // 不得不提前停止!
      // 一旦它排空,继续写入数据
      writer.once('drain', write);
    }
  }
}

writable.cork()#

Forces buffering of all writes.

强行滞留所有写入。

Buffered data will be flushed either at .uncork() or at .end() call.

滞留的数据会在 .uncork().end() 调用时被写入。

writable.uncork()#

Flush all data, buffered since .cork() call.

写入所有 .cork() 调用之后滞留的数据。

writable.end([chunk], [encoding], [callback])#

  • chunk String | Buffer Optional data to write
  • encoding String The encoding, if chunk is a String
  • callback Function Optional callback for when the stream is finished

  • chunk String | Buffer 可选,要写入的数据

  • encoding String 编码,假如 chunk 是一个字符串
  • callback Function 可选,流结束后的回调

Call this method when no more data will be written to the stream. If supplied, the callback is attached as a listener on the finish event.

当没有更多数据会被写入到流时调用此方法。如果给出,回调会被用作 finish 事件的监听器。

Calling write() after calling end() will raise an error.

在调用 end() 后调用 write() 会产生错误。

// 写入 'hello, ' 然后以 'world!' 结束
http.createServer(function (req, res) {
  res.write('hello, ');
  res.end('world!');
  // 现在不允许继续写入了
});

事件: 'finish'#

When the end() method has been called, and all data has been flushed to the underlying system, this event is emitted.

end() 方法被调用,并且所有数据已被写入到底层系统,此事件会被触发。

var writer = getWritableStreamSomehow();
for (var i = 0; i < 100; i++) {
  writer.write('hello, #' + i + '!\n');
}
writer.end('this is the end\n');
writer.on('finish', function() {
  console.error('已完成所有写入。');
});

事件: 'pipe'#

  • src Readable Stream source stream that is piping to this writable

  • src Readable Stream 导流到本可写流的来源流

This is emitted whenever the pipe() method is called on a readable stream, adding this writable to its set of destinations.

该事件发生于可读流的 pipe() 方法被调用并添加本可写流作为它的目标时。

var writer = getWritableStreamSomehow();
var reader = getReadableStreamSomehow();
writer.on('pipe', function(src) {
  console.error('某些东西正被导流到 writer');
  assert.equal(src, reader);
});
reader.pipe(writer);

事件: 'unpipe'#

This is emitted whenever the unpipe() method is called on a readable stream, removing this writable from its set of destinations.

该事件发生于可读流的 unpipe() 方法被调用并将本可写流从它的目标移除时。

var writer = getWritableStreamSomehow();
var reader = getReadableStreamSomehow();
writer.on('unpipe', function(src) {
  console.error('某些东西停止导流到 writer 了');
  assert.equal(src, reader);
});
reader.pipe(writer);
reader.unpipe(writer);

类: stream.Duplex#

Duplex streams are streams that implement both the Readable and Writable interfaces. See above for usage.

双工(Duplex)流同时实现了 ReadableWritable 的接口。用法详见上文。

Examples of Duplex streams include:

一些双工流的例子:

类: stream.Transform#

Transform streams are Duplex streams where the output is in some way computed from the input. They implement both the Readable and Writable interfaces. See above for usage.

转换(Transform)流是一种输出由输入计算所得的双工流。它们同时实现了 ReadableWritable 的接口。用法详见上文。

Examples of Transform streams include:

一些转换流的例子:

面向流实现者的 API#

To implement any sort of stream, the pattern is the same:

无论实现任何形式的流,模式都是一样的:

  1. Extend the appropriate parent class in your own subclass. (The util.inherits method is particularly helpful for this.)
  2. Call the appropriate parent class constructor in your constructor, to be sure that the internal mechanisms are set up properly.
  3. Implement one or more specific methods, as detailed below.

  1. 在您的子类中扩充适合的父类。(util.inherits 方法对此很有帮助。)

  2. 在您的构造函数中调用父类的构造函数,以确保内部的机制被正确初始化。
  3. 实现一个或多个特定的方法,参见下面的细节。

The class to extend and the method(s) to implement depend on the sort of stream class you are writing:

所扩充的类和要实现的方法取决于您要编写的流类的形式:

Use-case | Class | Method(s) to implement
-------- | ----- | ----------------------
Reading only | Readable | _read
Writing only | Writable | _write
Reading and writing | Duplex | _read, _write
Operate on written data, then read the result | Transform | _transform, _flush

使用情景 | 类 | 要实现的方法
-------- | --- | ------------
只读 | Readable | _read
只写 | Writable | _write
读写 | Duplex | _read, _write
操作被写入数据,然后读出结果 | Transform | _transform, _flush

In your implementation code, it is very important to never call the methods described in API for Stream Consumers above. Otherwise, you can potentially cause adverse side effects in programs that consume your streaming interfaces.

在您的实现代码中,十分重要的一点是绝对不要调用上文面向流消费者的 API 中所描述的方法,否则可能在消费您的流接口的程序中产生潜在的副作用。

类: stream.Readable#

stream.Readable is an abstract class designed to be extended with an underlying implementation of the _read(size) method.

stream.Readable 是一个可被扩充的、实现了底层方法 _read(size) 的抽象类。

Please see above under API for Stream Consumers for how to consume streams in your programs. What follows is an explanation of how to implement Readable streams in your programs.

请阅读前文面向流消费者的 API 章节了解如何在您的程序中消费流。下文将解释如何在您的程序中自己实现 Readable 流。

例子: 一个计数流#

This is a basic example of a Readable stream. It emits the numerals from 1 to 1,000,000 in ascending order, and then ends.

这是一个 Readable 流的基本例子。它将从 1 至 1,000,000 递增地触发数字,然后结束。

var Readable = require('stream').Readable;
var util = require('util');
util.inherits(Counter, Readable);

function Counter(opt) {
  Readable.call(this, opt);
  this._max = 1000000;
  this._index = 1;
}

Counter.prototype._read = function() {
  var i = this._index++;
  if (i > this._max)
    this.push(null);
  else {
    var str = '' + i;
    var buf = new Buffer(str, 'ascii');
    this.push(buf);
  }
};

例子: SimpleProtocol v1 (次优)#

This is similar to the parseHeader function described above, but implemented as a custom stream. Also, note that this implementation does not convert the incoming data to a string.

这个有点类似上文提到的 parseHeader 函数,但它被实现成一个自定义流。同样地,请注意这个实现并未将传入数据转换成字符串。

However, this would be better implemented as a Transform stream. See below for a better implementation.

实际上,更好的办法是将它实现成一个 Transform 流。更好的实现详见下文。

// 简易数据协议的解析器。
// “header”是一个 JSON 对象,后面紧跟 2 个 \n 字符,以及
// 消息主体。
//
// 注意: 使用 Transform 流能更简单地实现这个功能!
// 直接使用 Readable 并不是最佳方式,详见 Transform
// 章节下的备选例子。

var Readable = require('stream').Readable;
var util = require('util');

util.inherits(SimpleProtocol, Readable);

function SimpleProtocol(source, options) {
  if (!(this instanceof SimpleProtocol))
    return new SimpleProtocol(source, options);

  Readable.call(this, options);
  this._inBody = false;
  this._sawFirstCr = false;

  // source 是一个可读流,比如套接字或文件
  this._source = source;

  var self = this;
  source.on('end', function() {
    self.push(null);
  });

  // 当 source 可读时做点什么
  // read(0) 不会消费任何字节
  source.on('readable', function() {
    self.read(0);
  });

  this._rawHeader = [];
  this.header = null;
}

SimpleProtocol.prototype._read = function(n) {
  if (!this._inBody) {
    var chunk = this._source.read();

    // 若 source 暂时没有数据,我们也没有数据
    if (chunk === null)
      return this.push('');

    // 检查数据块中是否有 \n\n
    var split = -1;
    for (var i = 0; i < chunk.length; i++) {
      if (chunk[i] === 10) { // '\n'
        if (this._sawFirstCr) {
          split = i;
          break;
        } else {
          this._sawFirstCr = true;
        }
      } else {
        this._sawFirstCr = false;
      }
    }

    if (split === -1) {
      // 继续等待 \n\n
      // 暂存数据块,并再次尝试
      this._rawHeader.push(chunk);
      this.push('');
    } else {
      this._inBody = true;
      var h = chunk.slice(0, split);
      this._rawHeader.push(h);
      var header = Buffer.concat(this._rawHeader).toString();
      try {
        this.header = JSON.parse(header);
      } catch (er) {
        this.emit('error', new Error('invalid simple protocol data'));
        return;
      }
      // 现在,我们得到了一些多余的数据,所以需要 unshift
      // 将多余的数据放回读取队列以便我们的消费者能够读取
      var b = chunk.slice(split);
      this.unshift(b);

      // 并让它们知道我们完成了头部解析。
      this.emit('header', this.header);
    }
  } else {
    // 从现在开始,仅需向我们的消费者提供数据。
    // 注意不要 push(null),因为它表明 EOF。
    var chunk = this._source.read();
    if (chunk) this.push(chunk);
  }
};

// 用法:
// var parser = new SimpleProtocol(source);
// 现在 parser 是一个会触发 'header' 事件并提供已解析
// 的头部的可读流。

new stream.Readable([options])#

  • options Object

    • highWaterMark Number The maximum number of bytes to store in the internal buffer before ceasing to read from the underlying resource. Default=16kb, or 16 for objectMode streams
    • encoding String If specified, then buffers will be decoded to strings using the specified encoding. Default=null
    • objectMode Boolean Whether this stream should behave as a stream of objects. Meaning that stream.read(n) returns a single value instead of a Buffer of size n
  • options Object

    • highWaterMark Number 停止从底层资源读取前内部缓冲区最多能存放的字节数。缺省为 16kb,对于 objectMode 流则是 16
    • encoding String 若给出,则 Buffer 会被解码成所给编码的字符串。缺省为 null
    • objectMode Boolean 该流是否应该表现为对象的流。意思是说 stream.read(n) 返回一个单独的对象,而不是大小为 n 的 Buffer

In classes that extend the Readable class, make sure to call the Readable constructor so that the buffering settings can be properly initialized.

请确保在扩充 Readable 类的类中调用 Readable 构造函数以便缓冲设定能被正确初始化。

readable._read(size)#

  • size Number Number of bytes to read asynchronously

  • size Number 异步读取的字节数

Note: Implement this function, but do NOT call it directly.

注意:实现这个函数,但【不要】直接调用它。

This function should NOT be called directly. It should be implemented by child classes, and only called by the internal Readable class methods.

这个函数【不应该】被直接调用。它应该被子类所实现,并仅被 Readable 类内部方法所调用。

All Readable stream implementations must provide a _read method to fetch data from the underlying resource.

所有 Readable 流的实现都必须提供一个 _read 方法来从底层资源抓取数据。

This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.

该方法以下划线开头是因为它对于定义它的类是内部的,并且不应该被用户程序直接调用。但是,您应当在您的扩充类中覆盖这个方法。

When data is available, put it into the read queue by calling readable.push(chunk). If push returns false, then you should stop reading. When _read is called again, you should start pushing more data.

当数据可用时,调用 readable.push(chunk) 将它加入到读取队列。如果 push 返回 false,那么您应该停止读取。当 _read 被再次调用,您应该继续推出更多数据。

The size argument is advisory. Implementations where a "read" is a single call that returns data can use this to know how much data to fetch. Implementations where that is not relevant, such as TCP or TLS, may ignore this argument, and simply provide data whenever it becomes available. There is no need, for example to "wait" until size bytes are available before calling stream.push(chunk).

参数 size 仅供参考。对于“读取”是一次返回数据的单个调用的实现,可以用这个参数来得知应当抓取多少数据;对于与此无关的实现,比如 TCP 或 TLS,则可以忽略这个参数,在数据可用时直接提供即可。例如,没有必要“等到” size 个字节可用时才调用 stream.push(chunk)

readable.push(chunk, [encoding])#

  • chunk Buffer | null | String Chunk of data to push into the read queue
  • encoding String Encoding of String chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'
  • return Boolean Whether or not more pushes should be performed

  • chunk Buffer | null | String 推入读取队列的数据块

  • encoding String 字符串块的编码。必须是有效的 Buffer 编码,比如 utf8ascii
  • 返回 Boolean 是否应该继续推入

Note: This function should be called by Readable implementors, NOT by consumers of Readable streams.

注意:这个函数应该被 Readable 实现者调用,【而不是】Readable 流的消费者。

The _read() function will not be called again until at least one push(chunk) call is made.

函数 _read() 不会被再次调用,直到至少调用了一次 push(chunk)

The Readable class works by putting data into a read queue to be pulled out later by calling the read() method when the 'readable' event fires.

Readable 类的工作方式是,将数据读入一个队列,当 'readable' 事件发生、调用 read() 方法时,数据会被从队列中取出。

The push() method will explicitly insert some data into the read queue. If it is called with null then it will signal the end of the data (EOF).

push() 方法会明确地向读取队列中插入一些数据。如果调用它时传入了 null 参数,那么它会触发数据结束信号(EOF)。

This API is designed to be as flexible as possible. For example, you may be wrapping a lower-level source which has some sort of pause/resume mechanism, and a data callback. In those cases, you could wrap the low-level source object by doing something like this:

这个 API 被设计成尽可能地灵活。比如说,您可以包装一个低级别的具备某种暂停/恢复机制和数据回调的数据源。这种情况下,您可以通过这种方式包装低级别来源对象:

// source 是一个带 readStop() 和 readStart() 方法的类,
// 以及一个当有数据时会被调用的 `ondata` 成员、一个
// 当数据结束时会被调用的 `onend` 成员。

util.inherits(SourceWrapper, Readable);

function SourceWrapper(options) {
  Readable.call(this, options);

  this._source = getLowlevelSourceObject();
  var self = this;

  // 每当有数据时,我们将它推入到内部缓冲区中
  this._source.ondata = function(chunk) {
    // 如果 push() 返回 false,我们就需要暂停读取 source
    if (!self.push(chunk))
      self._source.readStop();
  };

  // 当来源结束时,我们 push 一个 `null` 块以表示 EOF
  this._source.onend = function() {
    self.push(null);
  };
}

// _read 会在流想要拉取更多数据时被调用
// 本例中忽略 size 参数
SourceWrapper.prototype._read = function(size) {
  this._source.readStart();
};

类: stream.Writable#

stream.Writable is an abstract class designed to be extended with an underlying implementation of the _write(chunk, encoding, callback) method.

stream.Writable 是一个可被扩充的、实现了底层方法 _write(chunk, encoding, callback) 的抽象类。

Please see above under API for Stream Consumers for how to consume writable streams in your programs. What follows is an explanation of how to implement Writable streams in your programs.

请阅读前文面向流消费者的 API 章节了解如何在您的程序中消费可读流。下文将解释如何在您的程序中自己实现 Writable 流。

new stream.Writable([options])#

  • options Object

    • highWaterMark Number Buffer level when write() starts returning false. Default=16kb, or 16 for objectMode streams
    • decodeStrings Boolean Whether or not to decode strings into Buffers before passing them to _write(). Default=true
  • options Object

    • highWaterMark Number write() 开始返回 false 的缓冲级别。缺省为 16kb,对于 objectMode 流则是 16
    • decodeStrings Boolean 是否在传递给 _write() 前将字符串解码成 Buffer。缺省为 true

In classes that extend the Writable class, make sure to call the constructor so that the buffering settings can be properly initialized.

请确保在扩充 Writable 类的类中调用构造函数以便缓冲设定能被正确初始化。

writable._write(chunk, encoding, callback)#

  • chunk Buffer | String The chunk to be written. Will always be a buffer unless the decodeStrings option was set to false.
  • encoding String If the chunk is a string, then this is the encoding type. Ignore if chunk is a buffer. Note that chunk will always be a buffer unless the decodeStrings option is explicitly set to false.
  • callback Function Call this function (optionally with an error argument) when you are done processing the supplied chunk.

  • chunk Buffer | String 要被写入的数据块。总会是一个 Buffer,除非 decodeStrings 选项被设定为 false

  • encoding String 如果数据块是字符串,则这里指定它的编码类型。如果数据块是 Buffer 则忽略此设定。请注意数据块总会是一个 Buffer,除非 decodeStrings 选项被明确设定为 false
  • callback Function 当您处理完所给数据块时调用此函数(可选地可附上一个错误参数)。

All Writable stream implementations must provide a _write() method to send data to the underlying resource.

所有 Writable 流的实现必须提供一个 _write() 方法来将数据发送到底层资源。

Note: This function MUST NOT be called directly. It should be implemented by child classes, and called by the internal Writable class methods only.

注意:该函数【禁止】被直接调用。它应该被子类所实现,并仅被 Writable 内部方法所调用。

Call the callback using the standard callback(error) pattern to signal that the write completed successfully or with an error.

使用标准的 callback(error) 形式来调用回调以表明写入成功完成或遇到错误。

If the decodeStrings flag is set in the constructor options, then chunk may be a string rather than a Buffer, and encoding will indicate the sort of string that it is. This is to support implementations that have an optimized handling for certain string data encodings. If you do not explicitly set the decodeStrings option to false, then you can safely ignore the encoding argument, and assume that chunk will always be a Buffer.

如果构造函数选项中设定了 decodeStrings 标志,则 chunk 可能会是字符串而不是 Buffer,并且 encoding 表明了字符串的格式。这种设计是为了支持对某些字符串数据编码提供优化处理的实现。如果您没有明确地将 decodeStrings 选项设定为 false,那么您可以安全地忽略 encoding 参数,并假定 chunk 总是一个 Buffer。

This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.

该方法以下划线开头是因为它对于定义它的类是内部的,并且不应该被用户程序直接调用。但是,您应当在您的扩充类中覆盖这个方法。

writable._writev(chunks, callback)#

  • chunks Array The chunks to be written. Each chunk has following format: { chunk: ..., encoding: ... }.
  • callback Function Call this function (optionally with an error argument) when you are done processing the supplied chunks.

  • chunks Array 要写入的块。每个块都遵循这种格式:{ chunk: ..., encoding: ... }

  • callback Function 当您处理完所给数据块时调用此函数(可选地可附上一个错误参数)。

Note: This function MUST NOT be called directly. It may be implemented by child classes, and called by the internal Writable class methods only.

注意:该函数【禁止】被直接调用。它应该被子类所实现,并仅被 Writable 内部方法所调用。

This function is completely optional to implement. In most cases it is unnecessary. If implemented, it will be called with all the chunks that are buffered in the write queue.

该函数的实现完全是可选的,在大多数情况下都是不必要的。如果实现,它会被以所有滞留在写入队列中的数据块调用。

类: stream.Duplex#

A "duplex" stream is one that is both Readable and Writable, such as a TCP socket connection.

“双工”(duplex)流同时兼具可读和可写特性,比如一个 TCP 套接字连接。

Note that stream.Duplex is an abstract class designed to be extended with an underlying implementation of the _read(size) and _write(chunk, encoding, callback) methods as you would with a Readable or Writable stream class.

值得注意的是,stream.Duplex 是一个可以像 Readable 或 Writable 一样被扩充、实现了底层方法 _read(size)_write(chunk, encoding, callback) 的抽象类。

Since JavaScript doesn't have multiple prototypal inheritance, this class prototypally inherits from Readable, and then parasitically from Writable. It is thus up to the user to implement both the lowlevel _read(n) method as well as the lowlevel _write(chunk, encoding, callback) method on extension duplex classes.

由于 JavaScript 并不具备多原型继承能力,这个类实际上继承自 Readable,并寄生自 Writable,从而让用户在双工类的扩充中能同时实现低级别的 _read(n) 方法和 _write(chunk, encoding, callback) 方法。

new stream.Duplex(options)#

  • options Object Passed to both Writable and Readable constructors. Also has the following fields:

    • allowHalfOpen Boolean Default=true. If set to false, then the stream will automatically end the readable side when the writable side ends and vice versa.
  • options Object 传递给 Writable 和 Readable 两者的构造函数。同时还有以下字段:

    • allowHalfOpen Boolean 缺省为 true。如果设定为 false,那么当可写端结束时流会自动结束可读端,反之亦然。

In classes that extend the Duplex class, make sure to call the constructor so that the buffering settings can be properly initialized.

请确保在扩充 Duplex 类的类中调用构造函数以便缓冲设定能被正确初始化。

类: stream.Transform#

A "transform" stream is a duplex stream where the output is causally connected in some way to the input, such as a zlib stream or a crypto stream.

“转换”(transform)流实际上是一个输出与输入存在因果关系的双工流,比如 zlib 流或 crypto 流。

There is no requirement that the output be the same size as the input, the same number of chunks, or arrive at the same time. For example, a Hash stream will only ever have a single chunk of output which is provided when the input is ended. A zlib stream will produce output that is either much smaller or much larger than its input.

输入和输出并无要求相同大小、相同块数或同时到达。举个例子,一个 Hash 流只会在输入结束时产生一个数据块的输出;一个 zlib 流会产生比输入小得多或大得多的输出。

Rather than implement the _read() and _write() methods, Transform classes must implement the _transform() method, and may optionally also implement the _flush() method. (See below.)

转换类必须实现 _transform() 方法,而不是 _read()_write() 方法。可选的,也可以实现 _flush() 方法。(详见下文。)

new stream.Transform([options])#

  • options Object Passed to both Writable and Readable constructors.

  • options Object 传递给 Writable 和 Readable 构造函数。

In classes that extend the Transform class, make sure to call the constructor so that the buffering settings can be properly initialized.

请确保在扩充 Transform 类的类中调用了构造函数,以使得缓冲设定能被正确初始化。

transform._transform(chunk, encoding, callback)#

  • chunk Buffer | String The chunk to be transformed. Will always be a buffer unless the decodeStrings option was set to false.
  • encoding String If the chunk is a string, then this is the encoding type. (Ignore if decodeStrings chunk is a buffer.)
  • callback Function Call this function (optionally with an error argument) when you are done processing the supplied chunk.

  • chunk Buffer | String 要被转换的数据块。总是 Buffer,除非 decodeStrings 选项被设定为 false

  • encoding String 如果数据块是一个字符串,那么这就是它的编码类型。(数据块是 Buffer 则会忽略此参数。)
  • callback Function 当您处理完所提供的数据块时调用此函数(可选地附上一个错误参数)。

Note: This function MUST NOT be called directly. It should be implemented by child classes, and called by the internal Transform class methods only.

注意:该函数【禁止】被直接调用。它应该被子类所实现,并仅被 Transform 内部方法所调用。

All Transform stream implementations must provide a _transform method to accept input and produce output.

所有转换流的实现都必须提供一个 _transform 方法来接受输入并产生输出。

_transform should do whatever has to be done in this specific Transform class, to handle the bytes being written, and pass them off to the readable portion of the interface. Do asynchronous I/O, process things, and so on.

_transform 应当承担特定 Transform 类中所有处理被写入的字节、并将它们传给接口可读端的职责:进行异步 I/O,处理其它事情等等。

Call transform.push(outputChunk) 0 or more times to generate output from this input chunk, depending on how much data you want to output as a result of this chunk.

根据您想从这个输入块输出多少数据,调用 transform.push(outputChunk) 零次或多次来生成输出。

Call the callback function only when the current chunk is completely consumed. Note that there may or may not be output as a result of any particular input chunk.

仅当当前数据块被完全消费后才调用回调函数。注意,任何特定的输入块既可能产生输出,也可能不产生输出。

This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.

该方法以下划线开头是因为它对于定义它的类是内部的,并且不应该被用户程序直接调用。但是,您应当在您自己的扩充类中覆盖这个方法。

transform._flush(callback)#

  • callback Function Call this function (optionally with an error argument) when you are done flushing any remaining data.

  • callback Function 当您写入完毕剩下的数据后调用此函数(可选地可附上一个错误对象)。

Note: This function MUST NOT be called directly. It MAY be implemented by child classes, and if so, will be called by the internal Transform class methods only.

注意:该函数【禁止】被直接调用。它【可以】被子类所实现,并且如果实现,仅被 Transform 内部方法所调用。

In some cases, your transform operation may need to emit a bit more data at the end of the stream. For example, a Zlib compression stream will store up some internal state so that it can optimally compress the output. At the end, however, it needs to do the best it can with what is left, so that the data will be complete.

在一些情景中,您的转换操作可能需要在流的末尾多发出一点数据。例如,一个 Zlib 压缩流会储存一些内部状态以便更优地压缩输出,但在最后它需要尽可能好地处理剩下的数据,以使数据完整。

In those cases, you can implement a _flush method, which will be called at the very end, after all the written data is consumed, but before emitting end to signal the end of the readable side. Just like with _transform, call transform.push(chunk) zero or more times, as appropriate, and call callback when the flush operation is complete.

在这种情况中,您可以实现一个 _flush 方法,它会在所有写入数据被消费之后、但在触发 end 表示可读端到达末尾之前被调用。和 _transform 一样,酌情调用 transform.push(chunk) 零次或多次,并在冲刷(flush)操作完成时调用 callback。

This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.

该方法以下划线开头是因为它对于定义它的类是内部的,并且不应该被用户程序直接调用。但是,您应当在您自己的扩充类中覆盖这个方法。

例子: SimpleProtocol 解析器 v2#

The example above of a simple protocol parser can be implemented simply by using the higher level Transform stream class, similar to the parseHeader and SimpleProtocol v1 examples above.

上文的简易协议解析器例子能够很简单地使用高级别的 Transform 流类来实现,类似于前文的 parseHeader 和 SimpleProtocol v1 示例。

In this example, rather than providing the input as an argument, it would be piped into the parser, which is a more idiomatic Node stream approach.

在这个示例中,输入会被导流到解析器中,而不是作为参数提供。这种做法更符合 Node 流的惯例。

var util = require('util');
var Transform = require('stream').Transform;
util.inherits(SimpleProtocol, Transform);

function SimpleProtocol(options) {
  if (!(this instanceof SimpleProtocol))
    return new SimpleProtocol(options);

  Transform.call(this, options);
  this._inBody = false;
  this._sawFirstCr = false;
  this._rawHeader = [];
  this.header = null;
}

SimpleProtocol.prototype._transform = function(chunk, encoding, done) {
  if (!this._inBody) {
    // check if the chunk has a \n\n
    var split = -1;
    for (var i = 0; i < chunk.length; i++) {
      if (chunk[i] === 10) { // '\n'
        if (this._sawFirstCr) {
          split = i;
          break;
        } else {
          this._sawFirstCr = true;
        }
      } else {
        this._sawFirstCr = false;
      }
    }

SimpleProtocol.prototype._transform = function(chunk, encoding, done) {
  if (!this._inBody) {
    // 检查数据块是否有 \n\n
    var split = -1;
    for (var i = 0; i < chunk.length; i++) {
      if (chunk[i] === 10) { // '\n'
        if (this._sawFirstCr) {
          split = i;
          break;
        } else {
          this._sawFirstCr = true;
        }
      } else {
        this._sawFirstCr = false;
      }
    }

    if (split === -1) {
      // 仍旧等待 \n\n
      // 暂存数据块并重试。
      this._rawHeader.push(chunk);
    } else {
      this._inBody = true;
      var h = chunk.slice(0, split);
      this._rawHeader.push(h);
      var header = Buffer.concat(this._rawHeader).toString();
      try {
        this.header = JSON.parse(header);
      } catch (er) {
        this.emit('error', new Error('invalid simple protocol data'));
        return;
      }
      // 并让它们知道我们完成了头部解析。
      this.emit('header', this.header);

      // 现在,由于我们获得了一些额外的数据,先触发这个。
      this.push(chunk.slice(split));
    }
  } else {
    // 之后,仅需向我们的消费者原样提供数据。
    this.push(chunk);
  }
  done();
};

// 用法:
// var parser = new SimpleProtocol();
// source.pipe(parser)
// 现在 parser 是一个会触发 'header' 并带上解析后的
// 头部数据的可读流。

类: stream.PassThrough#

This is a trivial implementation of a Transform stream that simply passes the input bytes across to the output. Its purpose is mainly for examples and testing, but there are occasionally use cases where it can come in handy as a building block for novel sorts of streams.

这是 Transform 流的一个简单实现,将输入的字节简单地传递给输出。它的主要用途是演示和测试,但偶尔要构建某种特殊流的时候也能派上用场。

流:内部细节#

缓冲#

Both Writable and Readable streams will buffer data on an internal object called _writableState.buffer or _readableState.buffer, respectively.

Writable 和 Readable 流都会分别在内部一个叫作 _writableState.buffer 或 _readableState.buffer 的对象中缓冲数据。

The amount of data that will potentially be buffered depends on the highWaterMark option which is passed into the constructor.

被缓冲的数据量取决于传递给构造函数的 highWaterMark(最高水位线)选项。

Buffering in Readable streams happens when the implementation calls stream.push(chunk). If the consumer of the Stream does not call stream.read(), then the data will sit in the internal queue until it is consumed.

Readable 流的缓冲发生在实现调用 stream.push(chunk) 的时候。如果流的消费者没有调用 stream.read(),那么数据将会一直留在内部队列中,直到被消费。

Buffering in Writable streams happens when the user calls stream.write(chunk) repeatedly, even when write() returns false.

Writable 流的缓冲发生在用户重复调用 stream.write(chunk)、即便此时 write() 返回 false 的时候。

The purpose of streams, especially with the pipe() method, is to limit the buffering of data to acceptable levels, so that sources and destinations of varying speed will not overwhelm the available memory.

流(尤其是 pipe() 方法)的目的,是将数据的缓冲量限制在一个可接受的水平,以使得速度不同的来源和目标不会淹没可用内存。

stream.read(0)#

There are some cases where you want to trigger a refresh of the underlying readable stream mechanisms, without actually consuming any data. In that case, you can call stream.read(0), which will always return null.

在某些情景中,您可能需要触发一次底层可读流机制的刷新,但不真正消费任何数据。在这种情况下,您可以调用 stream.read(0),它总会返回 null。

If the internal read buffer is below the highWaterMark, and the stream is not currently reading, then calling read(0) will trigger a low-level _read call.

如果内部读取缓冲低于 highWaterMark 水位线,并且流当前不在读取状态,那么调用 read(0) 会触发一个低级 _read 调用。

There is almost never a need to do this. However, you will see some cases in Node's internals where this is done, particularly in the Readable stream class internals.

虽然几乎没有必要这么做,但您可以在 Node 内部的某些地方看到它确实这么做了,尤其是在 Readable 流类的内部。

stream.push('')#

Pushing a zero-byte string or Buffer (when not in Object mode) has an interesting side effect. Because it is a call to stream.push(), it will end the reading process. However, it does not add any data to the readable buffer, so there's nothing for a user to consume.

推入一个零字节字符串或 Buffer(不在对象模式时)有一个有趣的副作用。因为它是一次对 stream.push() 的调用,它会结束读取(reading)过程。然而,它没有添加任何数据到可读缓冲中,所以用户没有东西可以消费。

Very rarely, there are cases where you have no data to provide now, but the consumer of your stream (or, perhaps, another bit of your own code) will know when to check again, by calling stream.read(0). In those cases, you may call stream.push('').

在极少数情况下,您当时没有数据提供,但您的流的消费者(或您代码的其它部分)会通过调用 stream.read(0) 得知何时再次检查。在这种情况下,您可以调用 stream.push('')。

So far, the only use case for this functionality is in the tls.CryptoStream class, which is deprecated in Node v0.12. If you find that you have to use stream.push(''), please consider another approach, because it almost certainly indicates that something is horribly wrong.

到目前为止,这个功能唯一的使用情景是在 tls.CryptoStream 类中,而它已在 Node v0.12 中被废弃。如果您发现您不得不使用 stream.push(''),请考虑其它方式,因为这几乎明确表明出了某种可怕的错误。

与 Node 早期版本的兼容性#

In versions of Node prior to v0.10, the Readable stream interface was simpler, but also less powerful and less useful.

在 v0.10 之前版本的 Node 中,Readable 流的接口较为简单,同时功能和实用性也较弱。

  • Rather than waiting for you to call the read() method, 'data' events would start emitting immediately. If you needed to do some I/O to decide how to handle data, then you had to store the chunks in some kind of buffer so that they would not be lost.
  • The pause() method was advisory, rather than guaranteed. This meant that you still had to be prepared to receive 'data' events even when the stream was in a paused state.

  • 'data' 事件会立即开始触发,而不会等待您调用 read() 方法。如果您需要进行某些 I/O 来决定如何处理数据,那么您只能将数据块储存到某种缓冲区中以防它们流失。

  • pause() 方法仅起提议作用,而不保证生效。这意味着,即便当流处于暂停状态时,您仍然需要准备接收 'data' 事件。

In Node v0.10, the Readable class described below was added. For backwards compatibility with older Node programs, Readable streams switch into "flowing mode" when a 'data' event handler is added, or when the resume() method is called. The effect is that, even if you are not using the new read() method and 'readable' event, you no longer have to worry about losing 'data' chunks.

在 Node v0.10 中,下文所述的 Readable 类被加入进来。为了与旧的 Node 程序向后兼容,Readable 流会在添加了 'data' 事件处理器、或 resume() 方法被调用时切换至“流动模式”。其作用是,即便您不使用新的 read() 方法和 'readable' 事件,您也不必担心丢失 'data' 数据块。

Most programs will continue to function normally. However, this introduces an edge case in the following conditions:

大多数程序会维持正常功能,然而,这也会在下列条件下引入一种边界情况:

  • No 'data' event handler is added.
  • The resume() method is never called.
  • The stream is not piped to any writable destination.

  • 没有添加 'data' 事件处理器。

  • resume() 方法从未被调用。
  • 流未被导流到任何可写目标。

For example, consider the following code:

举个例子,请留意下面代码:

// 警告!不能用!
net.createServer(function(socket) {

  // we add an 'end' method, but never consume the data
  socket.on('end', function() {
    // It will never get here.
    socket.end('I got your message (but didnt read it)\n');
  });

  // 我们添加了一个 'end' 事件,但从未消费数据
  socket.on('end', function() {
    // 它永远不会到达这里
    socket.end('我收到了您的来信(但我没看它)\n');
  });

}).listen(1337);

In versions of node prior to v0.10, the incoming message data would be simply discarded. However, in Node v0.10 and beyond, the socket will remain paused forever.

在 Node v0.10 之前的版本中,传入消息数据会被简单地丢弃。然而在 Node v0.10 及之后,socket 会一直保持暂停。

The workaround in this situation is to call the resume() method to start the flow of data:

对于这种情形的变通方法是调用 resume() 方法来开启数据流:

// 变通方法
net.createServer(function(socket) {

  socket.on('end', function() {
    socket.end('I got your message (but didnt read it)\n');
  });

  socket.on('end', function() {
    socket.end('我收到了您的来信(但我没看它)\n');
  });

  // start the flow of data, discarding it.
  socket.resume();

  // 开启数据流,并丢弃它们。
  socket.resume();

}).listen(1337);

In addition to new Readable streams switching into flowing mode, pre-v0.10 style streams can be wrapped in a Readable class using the wrap() method.

除了新的 Readable 流会切换至流动模式之外,v0.10 之前风格的流还可以通过 wrap() 方法被包装成 Readable 类。

对象模式#

Normally, Streams operate on Strings and Buffers exclusively.

通常情况下,流只操作字符串和 Buffer。

Streams that are in object mode can emit generic JavaScript values other than Buffers and Strings.

处于对象模式的流除了 Buffer 和字符串外,还能产生普通的 JavaScript 值。

A Readable stream in object mode will always return a single item from a call to stream.read(size), regardless of what the size argument is.

一个处于对象模式的 Readable 流调用 stream.read(size) 时总会返回单个项目,无论传入什么 size 参数。

A Writable stream in object mode will always ignore the encoding argument to stream.write(data, encoding).

一个处于对象模式的 Writable 流总是会忽略传给 stream.write(data, encoding)encoding 参数。

The special value null still retains its special value for object mode streams. That is, for object mode readable streams, null as a return value from stream.read() indicates that there is no more data, and stream.push(null) will signal the end of stream data (EOF).

特殊值 null 在对象模式流中依旧保持它的特殊性。也就是说,对于对象模式的可读流,stream.read() 返回 null 意味着没有更多数据,同时 stream.push(null) 会告知流数据到达末端(EOF)。

No streams in Node core are object mode streams. This pattern is only used by userland streaming libraries.

Node 核心不存在对象模式的流,这种设计只被某些用户态流式库所使用。

You should set objectMode in your stream child class constructor on the options object. Setting objectMode mid-stream is not safe.

您应该在您的流子类构造函数的选项对象中设置 objectMode。在流的过程中设置 objectMode 是不安全的。

状态对象#

Readable streams have a member object called _readableState. Writable streams have a member object called _writableState. Duplex streams have both.

Readable 流有一个成员对象叫作 _readableStateWritable 流有一个成员对象叫作 _writableStateDuplex 流二者兼备。

These objects should generally not be modified in child classes. However, if you have a Duplex or Transform stream that should be in objectMode on the readable side, and not in objectMode on the writable side, then you may do this in the constructor by setting the flag explicitly on the appropriate state object.

这些对象通常不应该被子类所更改。然而,如果您有一个 Duplex 或 Transform 流,它的可读端应该是 objectMode,但可写端却又不是 objectMode,那么您可以在构造函数里明确地设定合适的状态对象的标记来达到此目的。

var util = require('util');
var StringDecoder = require('string_decoder').StringDecoder;
var Transform = require('stream').Transform;
util.inherits(JSONParseStream, Transform);

// Gets \n-delimited JSON string data, and emits the parsed objects
function JSONParseStream(options) {
  if (!(this instanceof JSONParseStream))
    return new JSONParseStream(options);

// 获取以 \n 分隔的 JSON 字符串数据,并丢出解析后的对象
function JSONParseStream(options) {
  if (!(this instanceof JSONParseStream))
    return new JSONParseStream(options);

  Transform.call(this, options);
  this._writableState.objectMode = false;
  this._readableState.objectMode = true;
  this._buffer = '';
  this._decoder = new StringDecoder('utf8');
}

JSONParseStream.prototype._transform = function(chunk, encoding, cb) {
  this._buffer += this._decoder.write(chunk);
  // split on newlines
  var lines = this._buffer.split(/\r?\n/);
  // keep the last partial line buffered
  this._buffer = lines.pop();
  for (var l = 0; l < lines.length; l++) {
    var line = lines[l];
    try {
      var obj = JSON.parse(line);
    } catch (er) {
      this.emit('error', er);
      return;
    }
    // push the parsed object out to the readable consumer
    this.push(obj);
  }
  cb();
};

JSONParseStream.prototype._transform = function(chunk, encoding, cb) {
  this._buffer += this._decoder.write(chunk);
  // 以新行分割
  var lines = this._buffer.split(/\r?\n/);
  // 保留最后一行被缓冲
  this._buffer = lines.pop();
  for (var l = 0; l < lines.length; l++) {
    var line = lines[l];
    try {
      var obj = JSON.parse(line);
    } catch (er) {
      this.emit('error', er);
      return;
    }
    // 推出解析后的对象到可读消费者
    this.push(obj);
  }
  cb();
};

JSONParseStream.prototype._flush = function(cb) {
  // 仅仅处理剩下的东西
  var rem = this._buffer.trim();
  if (rem) {
    try {
      var obj = JSON.parse(rem);
    } catch (er) {
      this.emit('error', er);
      return;
    }
    // 推出解析后的对象到可读消费者
    this.push(obj);
  }
  cb();
};

The state objects contain other useful information for debugging the state of streams in your programs. It is safe to look at them, but beyond setting option flags in the constructor, it is not safe to modify them.

状态对象包含了其它调试您的程序的流的状态时有用的信息。读取它们是可以的,但越过构造函数的选项来更改它们是不安全的

加密(Crypto)#

稳定度: 2 - 不稳定;正在讨论未来版本的API变动。会尽量减少重大变动的发生。详见下文。

Use require('crypto') to access this module.

使用 require('crypto') 来调用该模块。

The crypto module offers a way of encapsulating secure credentials to be used as part of a secure HTTPS net or http connection.

crypto 模块提供在 HTTPS 或 HTTP 连接中封装安全凭证的方法。

It also offers a set of wrappers for OpenSSL's hash, hmac, cipher, decipher, sign and verify methods.

它还提供了对 OpenSSL 的 hash、hmac、cipher、decipher、sign 和 verify 等方法的一系列封装。

crypto.getCiphers()#

Returns an array with the names of the supported ciphers.

返回一个数组,包含支持的加密算法的名字。

Example:

示例:

var ciphers = crypto.getCiphers();
console.log(ciphers); // ['AES-128-CBC', 'AES-128-CBC-HMAC-SHA1', ...]

crypto.getHashes()#

Returns an array with the names of the supported hash algorithms.

返回一个包含所支持的哈希算法的数组。

Example:

示例:

var hashes = crypto.getHashes();
console.log(hashes); // ['sha', 'sha1', 'sha1WithRSAEncryption', ...]

crypto.createCredentials(details)#

Creates a credentials object, with the optional details being a dictionary with keys:

创建一个加密凭证对象,接受一个可选的参数对象:

  • pfx : A string or buffer holding the PFX or PKCS12 encoded private key, certificate and CA certificates
  • key : A string holding the PEM encoded private key
  • passphrase : A string of passphrase for the private key or pfx
  • cert : A string holding the PEM encoded certificate
  • ca : Either a string or list of strings of PEM encoded CA certificates to trust.
  • crl : Either a string or list of strings of PEM encoded CRLs (Certificate Revocation List)
  • ciphers: A string describing the ciphers to use or exclude. Consult http://www.openssl.org/docs/apps/ciphers.html#CIPHER_LIST_FORMAT for details on the format.

  • pfx : 一个字符串或者buffer对象,代表经PFX或者PKCS12编码产生的私钥、证书以及CA证书

  • key : 一个字符串,代表经PEM编码产生的私钥
  • passphrase : 私钥或者pfx的密码
  • cert : 一个字符串,代表经PEM编码产生的证书
  • ca : 一个字符串或者字符串数组,表示可信任的经PEM编码产生的CA证书列表
  • crl : 一个字符串或者字符串数组,表示经PEM编码产生的CRL(证书吊销列表 Certificate Revocation List)
  • ciphers: 一个字符串,表示需要使用或者排除的加密算法 可以在 http://www.openssl.org/docs/apps/ciphers.html#CIPHER_LIST_FORMAT 查看更多关于加密算法格式的资料。

If no 'ca' details are given, then node.js will use the default publicly trusted list of CAs as given in

http://mxr.mozilla.org/mozilla/source/security/nss/lib/ckfw/builtins/certdata.txt.

如果没有指定ca,node.js会使用http://mxr.mozilla.org/mozilla/source/security/nss/lib/ckfw/builtins/certdata.txt提供的公共可信任的CA列表。

crypto.createHash(algorithm)#

Creates and returns a hash object, a cryptographic hash with the given algorithm which can be used to generate hash digests.

创建并返回一个哈希对象:一个使用给定算法、可用于生成哈希摘要的加密哈希。

algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha1', 'md5', 'sha256', 'sha512', etc. On recent releases, openssl list-message-digest-algorithms will display the available digest algorithms.

algorithm 取决于平台上所安装的 OpenSSL 版本所支持的算法。比如 'sha1'、'md5'、'sha256'、'sha512' 等等。在最近的发行版本中,openssl list-message-digest-algorithms 会显示可用的摘要算法。

Example: this program that takes the sha1 sum of a file

例子:这段程序会计算出一个文件的 sha1 摘要值。

var crypto = require('crypto');
var fs = require('fs');

var filename = process.argv[2];
var shasum = crypto.createHash('sha1');

var s = fs.ReadStream(filename);
s.on('data', function(d) {
  shasum.update(d);
});

s.on('end', function() {
  var d = shasum.digest('hex');
  console.log(d + '  ' + filename);
});

类: Hash#

The class for creating hash digests of data.

创建数据哈希摘要的类。

It is a stream that is both readable and writable. The written data is used to compute the hash. Once the writable side of the stream is ended, use the read() method to get the computed hash digest. The legacy update and digest methods are also supported.

它是一个既可读又可写的流。所写入的数据会被用作计算哈希。当流的可写端终止后,使用 read() 方法来获取计算所得的哈希摘要。同时也支持旧有的 update 和 digest 方法。

Returned by crypto.createHash.

通过 crypto.createHash 返回。

hash.update(data, [input_encoding])#

Updates the hash content with the given data, the encoding of which is given in input_encoding and can be 'utf8', 'ascii' or 'binary'. If no encoding is provided, then a buffer is expected.

通过提供的数据更新哈希对象,可以通过input_encoding指定编码为'utf8''ascii'或者 'binary'。如果没有指定编码,将作为二进制数据(buffer)处理。

This can be called many times with new data as it is streamed.

在数据以流的形式传入时,可以使用新数据多次调用该方法。

hash.digest([encoding])#

Calculates the digest of all of the passed data to be hashed. The encoding can be 'hex', 'binary' or 'base64'. If no encoding is provided, then a buffer is returned.

计算传入的所有数据的摘要值。encoding可以是'hex''binary'或者'base64',如果没有指定,会返回一个buffer对象。

Note: hash object can not be used after digest() method has been called.

注意:hash 对象在 digest() 方法被调用后将不可用。

crypto.createHmac(algorithm, key)#

Creates and returns a hmac object, a cryptographic hmac with the given algorithm and key.

创建并返回一个 hmac 对象,即用给定的算法和密钥生成的加密 HMAC。

It is a stream that is both readable and writable. The written data is used to compute the hmac. Once the writable side of the stream is ended, use the read() method to get the computed digest. The legacy update and digest methods are also supported.

它是一个既可读又可写的流(stream)。写入的数据会被用于计算hmac。写入终止后,可以使用read()方法获取计算后的摘要值。之前版本的updatedigest方法仍然支持。

algorithm is dependent on the available algorithms supported by OpenSSL - see createHash above. key is the hmac key to be used.

algorithm 取决于 OpenSSL 所支持的可用算法——见上文 createHash 部分。key 是 hmac 算法所使用的密钥。

Class: Hmac#

Class for creating cryptographic hmac content.

用于生成 hmac 加密内容的类。

Returned by crypto.createHmac.

crypto.createHmac返回。

hmac.update(data)#

Update the hmac content with the given data. This can be called many times with new data as it is streamed.

通过提供的数据更新hmac对象。因为它是流式数据,所以可以使用新数据调用很多次。

hmac.digest([encoding])#

Calculates the digest of all of the passed data to the hmac. The encoding can be 'hex', 'binary' or 'base64'. If no encoding is provided, then a buffer is returned.

计算传入的所有数据的hmac摘要值。encoding可以是'hex''binary'或者'base64',如果没有指定,会返回一个buffer对象。

Note: hmac object can not be used after digest() method has been called.

注意: hmac对象在调用digest()之后就不再可用了。

crypto.createCipher(algorithm, password)#

Creates and returns a cipher object, with the given algorithm and password.

用给定的算法和密码,创建并返回一个 cipher 加密对象。(译者注:cipher 就是加密算法的意思,SSL 的 cipher 主要是对称加密算法和非对称加密算法的组合。)

algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent releases, openssl list-cipher-algorithms will display the available cipher algorithms. password is used to derive key and IV, which must be a 'binary' encoded string or a buffer.

algorithm 取决于 OpenSSL,例如 'aes192' 等。在最近的发行版本中,执行 openssl list-cipher-algorithms 命令会显示所有可用的加密算法。password 用来派生 key(密钥)和 IV(初始化向量),它必须是一个 'binary' 编码的字符串或一个 buffer。

It is a stream that is both readable and writable. The written plain-text data is used to produce enciphered data on the readable side. Once the writable side of the stream is ended, use the read() method to get the enciphered contents. The legacy update and final methods are also supported.

它是一个既可读又可写的流。所写入的明文数据会在可读端产生加密后的数据。当流的可写端终止后,使用 read() 方法来获取加密后的内容。同时也支持旧有的 update 和 final 方法。

crypto.createCipheriv(algorithm, key, iv)#

Creates and returns a cipher object, with the given algorithm, key and iv.

用给定的算法、密码和向量,创建并返回一个cipher加密算法的对象。

algorithm is the same as the argument to createCipher(). key is the raw key used by the algorithm. iv is an initialization vector.

algorithm 和 createCipher() 方法的参数相同。key 是该算法所使用的原始密钥,iv 是一个初始化向量。

key and iv must be 'binary' encoded strings or buffers.

key(密钥)和 iv(向量)必须是 'binary' 编码的字符串或 buffer。

Class: Cipher#

Class for encrypting data.

这个类是用来加密数据的。

Returned by crypto.createCipher and crypto.createCipheriv.

这个类由 crypto.createCiphercrypto.createCipheriv 返回。

Cipher objects are streams that are both readable and writable. The written plain text data is used to produce the encrypted data on the readable side. The legacy update and final methods are also supported.

Cipher 加密对象是既可读又可写的流。所写入的明文数据会在可读端产生加密后的数据。同时也支持旧有的 update 和 final 方法。

cipher.update(data, [input_encoding], [output_encoding])#

Updates the cipher with data, the encoding of which is given in input_encoding and can be 'utf8', 'ascii' or 'binary'. If no encoding is provided, then a buffer is expected.

data参数更新cipher加密对象, 它的编码input_encoding必须是下列给定编码的 'utf8', 'ascii' or 'binary' 中一种。如果没有编码参数,那么打他参数必须是一个buffer。

The output_encoding specifies the output format of the enciphered data, and can be 'binary', 'base64' or 'hex'. If no encoding is provided, then a buffer is returned.

参数 output_encoding 指定了加密数据的输出格式,可以是 'binary'、'base64' 或 'hex'。如果没有提供该参数,则返回一个 buffer。

Returns the enciphered contents, and can be called many times with new data as it is streamed.

返回加密后的内容。在数据以流的形式传入时,可以使用新数据多次调用该方法。

cipher.final([output_encoding])#

Returns any remaining enciphered contents, with output_encoding being one of: 'binary', 'base64' or 'hex'. If no encoding is provided, then a buffer is returned.

返回剩余的加密内容,output_encoding'binary', 'base64''hex'中的任意一个。 如果没有提供编码格式,则返回一个buffer对象。

Note: cipher object can not be used after final() method has been called.

注: 调用final()函数后cipher 对象不能被使用。

cipher.setAutoPadding(auto_padding=true)#

You can disable automatic padding of the input data to block size. If auto_padding is false, the length of the entire input data must be a multiple of the cipher's block size or final will fail. Useful for non-standard padding, e.g. using 0x0 instead of PKCS padding. You must call this before cipher.final.

您可以禁用将输入数据自动填充到块大小的功能。如果 auto_padding 为 false,那么整个输入数据的长度必须是加密器块大小的整数倍,否则 final 会失败。这对非标准的填充很有用,例如使用 0x0 而非 PKCS 填充。该函数必须在 cipher.final 之前调用。

crypto.createDecipher(algorithm, password)#

Creates and returns a decipher object, with the given algorithm and key. This is the mirror of the createCipher() above.

根据给定的算法和密钥,创建并返回一个解密器对象。这是上述createCipher()的一个镜像。

crypto.createDecipheriv(algorithm, key, iv)#

Creates and returns a decipher object, with the given algorithm, key and iv. This is the mirror of the createCipheriv() above.

根据给定的算法、密钥和初始化向量,创建并返回一个解密器对象。这是上述 createCipheriv() 的一个镜像。

Class: Decipher#

Class for decrypting data.

解密数据的类。

Returned by crypto.createDecipher and crypto.createDecipheriv.

crypto.createDeciphercrypto.createDecipheriv返回。

Decipher objects are streams that are both readable and writable. The written enciphered data is used to produce the plain-text data on the the readable side. The legacy update and final methods are also supported.

解密器对象是既可读又可写的流。所写入的加密数据会被用来在可读端生成明文数据。同时也支持旧有的 update 和 final 方法。

decipher.update(data, [input_encoding], [output_encoding])#

Updates the decipher with data, which is encoded in 'binary', 'base64' or 'hex'. If no encoding is provided, then a buffer is expected.

data来更新解密器,其中data'binary', 'base64''hex'进行编码。如果没有指明编码方式,则默认data是一个buffer对象。

The output_decoding specifies in what format to return the deciphered plaintext: 'binary', 'ascii' or 'utf8'. If no encoding is provided, then a buffer is returned.

output_decoding指明了用以下哪种编码方式返回解密后的平文:'binary', 'ascii''utf8'。如果没有指明编码方式,则返回一个buffer对象。

decipher.final([output_encoding])#

Returns any remaining plaintext which is deciphered, with output_encoding being one of: 'binary', 'ascii' or 'utf8'. If no encoding is provided, then a buffer is returned.

返回剩余的解密后明文,output_encoding 是 'binary'、'ascii' 或 'utf8' 中的任意一个。如果没有指明编码方式,则返回一个 buffer 对象。

Note: decipher object can not be used after final() method has been called.

注: 调用final()函数后不能使用decipher 对象。

decipher.setAutoPadding(auto_padding=true)#

You can disable auto padding if the data has been encrypted without standard block padding to prevent decipher.final from checking and removing it. Can only work if the input data's length is a multiple of the ciphers block size. You must call this before streaming data to decipher.update.

如果数据以非标准的块填充方式被加密,那么你可以禁用自动填充来防止decipher.final对数据进行检查和移除。这只有在输入数据的长度是加密器块大小的整倍数时才有效。这个函数必须在将数据流传递给decipher.update之前调用。

crypto.createSign(algorithm)#

Creates and returns a signing object, with the given algorithm. On recent OpenSSL releases, openssl list-public-key-algorithms will display the available signing algorithms. Examples are 'RSA-SHA256'.

根据给定的算法,创建并返回一个signing对象。在最近的OpenSSL发布版本中,openssl list-public-key-algorithms会列出可用的签名算法,例如'RSA-SHA256'

Class: Sign#

Class for generating signatures.

生成数字签名的类。

Returned by crypto.createSign.

crypto.createSign返回。

Sign objects are writable streams. The written data is used to generate the signature. Once all of the data has been written, the sign method will return the signature. The legacy update method is also supported.

Sign 对象是可写的流。被写入的数据用来生成数字签名。当所有数据都被写入后,sign 方法会返回数字签名。同时也支持旧有的 update 方法。

sign.update(data)#

Updates the sign object with data. This can be called many times with new data as it is streamed.

data来更新sign对象。 This can be called many times with new data as it is streamed.

sign.sign(private_key, [output_format])#

Calculates the signature on all the updated data passed through the sign. private_key is a string containing the PEM encoded private key for signing.

根据所有传送给sign的更新数据来计算电子签名。private_key是一个包含了签名私钥的字符串,而该私钥是用PEM编码的。

Returns the signature in output_format which can be 'binary', 'hex' or 'base64'. If no encoding is provided, then a buffer is returned.

返回一个数字签名,其格式 output_format 可以是 'binary'、'hex' 或 'base64'。如果没有指明编码方式,则返回一个 buffer 对象。

Note: sign object can not be used after sign() method has been called.

注:调用sign()后不能使用sign对象。

crypto.createVerify(algorithm)#

Creates and returns a verification object, with the given algorithm. This is the mirror of the signing object above.

根据指明的算法,创建并返回一个验证器对象。这是上述签名器对象的镜像。

Class: Verify#

Class for verifying signatures.

用来验证数字签名的类。

Returned by crypto.createVerify.

crypto.createVerify返回。

Verify objects are writable streams. The written data is used to validate against the supplied signature. Once all of the data has been written, the verify method will return true if the supplied signature is valid. The legacy update method is also supported.

验证器对象是可写流(writable stream)。写入的数据会被用来与提供的数字签名进行比对验证。在所有数据写入后,如果提供的数字签名有效,verify函数会返回真。出于兼容性考虑,也支持旧式的 update 函数。

verifier.update(data)#

Updates the verifier object with data. This can be called many times with new data as it is streamed.

data更新验证器对象。当数据以流的形式传入时,可以用新数据多次调用此函数。

verifier.verify(object, signature, [signature_format])#

Verifies the signed data by using the object and signature. object is a string containing a PEM encoded object, which can be one of RSA public key, DSA public key, or X.509 certificate. signature is the previously calculated signature for the data, in the signature_format which can be 'binary', 'hex' or 'base64'. If no encoding is specified, then a buffer is expected.

objectsignature来验证被签名的数据。 object是一个字符串,这个字符串包含了一个被PEM编码的对象,这个对象可以是RSA公钥,DSA公钥或者X.509 证书。 signature是之前计算出来的数字签名,其中的 signature_format可以是'binary', 'hex''base64'. 如果没有指明编码方式,那么默认是一个buffer对象。

Returns true or false depending on the validity of the signature for the data and public key.

根据数字签名对于数据和公钥的有效性,返回true或false。

Note: verifier object can not be used after verify() method has been called.

注: 调用verify()函数后不能使用verifier对象。

crypto.createDiffieHellman(prime_length)#

Creates a Diffie-Hellman key exchange object and generates a prime of the given bit length. The generator used is 2.

创建一个迪菲-赫尔曼密钥交换(Diffie-Hellman key exchange)对象,并根据给定的位长度生成一个质数。所用的生成器(generator)是2

crypto.createDiffieHellman(prime, [encoding])#

Creates a Diffie-Hellman key exchange object using the supplied prime. The generator used is 2. Encoding can be 'binary', 'hex', or 'base64'. If no encoding is specified, then a buffer is expected.

根据给定的质数创建一个迪菲-赫尔曼密钥交换(Diffie-Hellman key exchange)对象。 所用的生成器是2。编码方式可以是'binary', 'hex''base64'。如果没有指明编码方式,则默认是一个buffer对象。

Class: DiffieHellman#

The class for creating Diffie-Hellman key exchanges.

创建迪菲-赫尔曼密钥交换(Diffie-Hellman key exchanges)的类。

Returned by crypto.createDiffieHellman.

crypto.createDiffieHellman返回。

diffieHellman.generateKeys([encoding])#

Generates private and public Diffie-Hellman key values, and returns the public key in the specified encoding. This key should be transferred to the other party. Encoding can be 'binary', 'hex', or 'base64'. If no encoding is provided, then a buffer is returned.

生成迪菲-赫尔曼(Diffie-Hellman)算法的公钥和私钥,并根据指明的编码方式返回公钥。这个公钥应该传送给通信的另一方。编码方式可以是 'binary', 'hex''base64'。如果没有指明编码方式,则返回一个buffer对象。

diffieHellman.computeSecret(other_public_key, [input_encoding], [output_encoding])#

Computes the shared secret using other_public_key as the other party's public key and returns the computed shared secret. Supplied key is interpreted using specified input_encoding, and secret is encoded using specified output_encoding. Encodings can be 'binary', 'hex', or 'base64'. If the input encoding is not provided, then a buffer is expected.

other_public_key作为第三方公钥来计算共享秘密,并返回这个共享秘密。参数中的密钥会以input_encoding编码方式来解读,而共享密钥则会用output_encoding进行编码。编码方式可以是'binary', 'hex''base64'。如果没有提供输入的编码方式,则默认为一个buffer对象。

If no output encoding is given, then a buffer is returned.

如果没有指明输出的编码方式,则返回一个buffer对象。

diffieHellman.getPrime([encoding])#

Returns the Diffie-Hellman prime in the specified encoding, which can be 'binary', 'hex', or 'base64'. If no encoding is provided, then a buffer is returned.

根据指明的编码格式返回迪菲-赫尔曼(Diffie-Hellman)质数,其中编码方式可以是'binary', 'hex''base64'。如果没有指明编码方式,则返回一个buffer对象。

diffieHellman.getGenerator([encoding])#

Returns the Diffie-Hellman generator in the specified encoding, which can be 'binary', 'hex', or 'base64'. If no encoding is provided, then a buffer is returned.

根据指明的编码格式返回迪菲-赫尔曼(Diffie-Hellman)生成器(generator),其中编码方式可以是'binary', 'hex''base64'。如果没有指明编码方式,则返回一个buffer对象。

diffieHellman.getPublicKey([encoding])#

Returns the Diffie-Hellman public key in the specified encoding, which can be 'binary', 'hex', or 'base64'. If no encoding is provided, then a buffer is returned.

根据指明的编码格式返回迪菲-赫尔曼(Diffie-Hellman)公钥,其中编码方式可以是'binary', 'hex''base64'。 如果没有指明编码方式,则返回一个buffer对象。

diffieHellman.getPrivateKey([encoding])#

Returns the Diffie-Hellman private key in the specified encoding, which can be 'binary', 'hex', or 'base64'. If no encoding is provided, then a buffer is returned.

根据指明的编码格式返回迪菲-赫尔曼(Diffie-Hellman)私钥,其中编码方式可以是'binary', 'hex''base64'。如果没有指明编码方式,则返回一个buffer对象。

diffieHellman.setPublicKey(public_key, [encoding])#

Sets the Diffie-Hellman public key. Key encoding can be 'binary', 'hex' or 'base64'. If no encoding is provided, then a buffer is expected.

设置迪菲-赫尔曼(Diffie-Hellman)公钥,编码方式可以是'binary', 'hex''base64'。如果没有指明编码方式,则默认输入为一个buffer对象。

diffieHellman.setPrivateKey(private_key, [encoding])#

Sets the Diffie-Hellman private key. Key encoding can be 'binary', 'hex' or 'base64'. If no encoding is provided, then a buffer is expected.

设置迪菲-赫尔曼(Diffie-Hellman)私钥,编码方式可以是'binary', 'hex''base64'。如果没有指明编码方式,则默认输入为一个buffer对象。

crypto.getDiffieHellman(group_name)#

Creates a predefined Diffie-Hellman key exchange object. The supported groups are: 'modp1', 'modp2', 'modp5' (defined in RFC 2412) and 'modp14', 'modp15', 'modp16', 'modp17', 'modp18' (defined in RFC 3526). The returned object mimics the interface of objects created by crypto.createDiffieHellman() above, but will not allow to change the keys (with diffieHellman.setPublicKey() for example). The advantage of using this routine is that the parties don't have to generate nor exchange group modulus beforehand, saving both processor and communication time.

创建一个预定义的迪菲-赫尔曼密钥交换(Diffie-Hellman key exchange)对象。支持以下的D-H组:'modp1', 'modp2', 'modp5' (在RFC 2412中定义) 和 'modp14', 'modp15', 'modp16', 'modp17', 'modp18' (在 RFC 3526中定义)。返回的对象模仿了上述 crypto.createDiffieHellman()方法所创建的对象的接口,但不允许更改密钥(例如不能使用 diffieHellman.setPublicKey())。使用这套流程的好处是双方不需要事先生成或交换组模数(group modulus),节省了处理器和通信时间。

Example (obtaining a shared secret):

例子 (获取一个共享秘密):

var crypto = require('crypto');
var alice = crypto.getDiffieHellman('modp5');
var bob = crypto.getDiffieHellman('modp5');

alice.generateKeys();
bob.generateKeys();

var alice_secret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
var bob_secret = bob.computeSecret(alice.getPublicKey(), null, 'hex');

/* alice_secret和 bob_secret应该是一样的 */
console.log(alice_secret == bob_secret);

crypto.pbkdf2(password, salt, iterations, keylen, callback)#

Asynchronous PBKDF2 applies pseudorandom function HMAC-SHA1 to derive a key of given length from the given password, salt and iterations. The callback gets two arguments (err, derivedKey).

异步PBKDF2应用伪随机函数HMAC-SHA1,从给定的密码(password)、盐值(salt)和迭代次数(iterations)派生出指定长度(keylen)的密钥。回调函数得到两个参数 (err, derivedKey)

crypto.pbkdf2Sync(password, salt, iterations, keylen)#

Synchronous PBKDF2 function. Returns derivedKey or throws error.

同步 PBKDF2 函数。返回derivedKey或抛出一个错误。

crypto.randomBytes(size, [callback])#

Generates cryptographically strong pseudo-random data. Usage:

生成密码学强度的伪随机数据。用法:

// 异步
var crypto = require('crypto');
crypto.randomBytes(256, function(ex, buf) {
  if (ex) throw ex;
  console.log('有 %d 字节的随机数据: %s', buf.length, buf);
});

// 同步
try {
  var buf = crypto.randomBytes(256);
  console.log('有 %d 字节的随机数据: %s', buf.length, buf);
} catch (ex) {
  // 处理错误
}

crypto.pseudoRandomBytes(size, [callback])#

Generates non-cryptographically strong pseudo-random data. The data returned will be unique if it is sufficiently long, but is not necessarily unpredictable. For this reason, the output of this function should never be used where unpredictability is important, such as in the generation of encryption keys.

生成非密码学强度的伪随机数据。如果数据足够长的话,返回的数据会是唯一的,但这个返回值不一定是不可预测的。因此,当不可预测性很重要时,绝不应该使用这个函数的输出,例如在生成加密密钥时。

Usage is otherwise identical to crypto.randomBytes.

用法与 crypto.randomBytes一模一样。

crypto.DEFAULT_ENCODING#

The default encoding to use for functions that can take either strings or buffers. The default value is 'buffer', which makes it default to using Buffer objects. This is here to make the crypto module more easily compatible with legacy programs that expected 'binary' to be the default encoding.

对于可以接受字符串或buffer对象的函数的默认编码方式。默认值是'buffer',即默认使用Buffer对象。设置此项是为了让crypto模块与那些以'binary'为默认编码方式的遗留程序更容易兼容。

Note that new programs will probably expect buffers, so only use this as a temporary measure.

要注意,新的程序会期待buffer对象,所以使用这个时请只作为暂时的手段。

Recent API Changes#

The Crypto module was added to Node before there was the concept of a unified Stream API, and before there were Buffer objects for handling binary data.

早在统一的流API概念出现,以及引入Buffer对象来处理二进制数据之前,Crypto模块就被添加到Node。

As such, the streaming classes don't have the typical methods found on other Node classes, and many methods accepted and returned Binary-encoded strings by default rather than Buffers. This was changed to use Buffers by default instead.

因为这样,与流有关的类中并没有其它Node类的典型函数,而且很多函数接受和返回默认的二进制编码的字符串,而不是Buffer对象。在最近的修改中,这些函数都被改成默认使用Buffer对象。

This is a breaking change for some use cases, but not all.

这对于某些(但不是全部)使用场景来讲是重大的改变。

For example, if you currently use the default arguments to the Sign class, and then pass the results to the Verify class, without ever inspecting the data, then it will continue to work as before. Where you once got a binary string and then presented the binary string to the Verify object, you'll now get a Buffer, and present the Buffer to the Verify object.

例如,如果你现在使用Sign类的默认参数,然后在没有检查数据的情况下,将结果传递给Verify类,那么程序会照常工作。在以前,你会拿到一个二进制字符串,然后它传递给Verify对象;而现在,你会得到一个Buffer对象,然后把它传递给Verify对象。

However, if you were doing things with the string data that will not work properly on Buffers (such as, concatenating them, storing in databases, etc.), or you are passing binary strings to the crypto functions without an encoding argument, then you will need to start providing encoding arguments to specify which encoding you'd like to use. To switch to the previous style of using binary strings by default, set the crypto.DEFAULT_ENCODING field to 'binary'. Note that new programs will probably expect buffers, so only use this as a temporary measure.

但是,如果你以前是使用那些在Buffer对象上不能正常工作的字符串数据(例如拼接字符串、存入数据库等),或者在没有指定编码参数的情况下将二进制字符串传递给加密函数的话,那你就要开始提供编码方式参数来指明你想使用的编码方式了。如果想切换回旧的风格默认使用二进制字符串,需要把crypto.DEFAULT_ENCODING字段设为'binary'。但请注意,因为新的程序很可能会期望buffer对象,所以仅将此当做临时手段。

TLS (SSL)#

稳定度: 3 - 稳定

Use require('tls') to access this module.

使用 require('tls') 来访问此模块。

The tls module uses OpenSSL to provide Transport Layer Security and/or Secure Socket Layer: encrypted stream communication.

tls 模块使用 OpenSSL 来提供传输层安全协议(Transport Layer Security)和/或安全套接层(Secure Socket Layer):加密过的流通讯。

TLS/SSL is a public/private key infrastructure. Each client and each server must have a private key. A private key is created like this

TLS/SSL 是一种公钥/私钥架构。每个客户端和服务器都必有一个私钥。一个私钥使用类似的方式创建:

openssl genrsa -out ryans-key.pem 1024

All servers and some clients need to have a certificate. Certificates are public keys signed by a Certificate Authority or self-signed. The first step to getting a certificate is to create a "Certificate Signing Request" (CSR) file. This is done with:

所有服务器和某些客户端需要具备证书。证书是由证书颁发机构签发或自签发的公钥。获取证书的第一步是创建一个“证书签发申请”(CSR)文件。使用这条命令完成:

openssl req -new -key ryans-key.pem -out ryans-csr.pem

To create a self-signed certificate with the CSR, do this:

像这样使用 CSR 创建一个自签名证书:

openssl x509 -req -in ryans-csr.pem -signkey ryans-key.pem -out ryans-cert.pem

Alternatively you can send the CSR to a Certificate Authority for signing.

又或者你可以将 CSR 发送给一个数字证书认证机构请求签名。

(TODO: docs on creating a CA, for now interested users should just look at test/fixtures/keys/Makefile in the Node source code)

To create .pfx or .p12, do this:

像这样创建 .pfx 或 .p12:

openssl pkcs12 -export -in agent5-cert.pem -inkey agent5-key.pem \
    -certfile ca-cert.pem -out agent5.pfx
  • in: certificate
  • inkey: private key
  • certfile: all CA certs concatenated in one file like cat ca1-cert.pem ca2-cert.pem > ca-cert.pem

  • in: 证书

  • inkey: 私钥
  • certfile: 将所有CA证书连接成一个文件,例如 cat ca1-cert.pem ca2-cert.pem > ca-cert.pem

Client-initiated renegotiation attack mitigation#

The TLS protocol lets the client renegotiate certain aspects of the TLS session. Unfortunately, session renegotiation requires a disproportional amount of server-side resources, which makes it a potential vector for denial-of-service attacks.

TLS协议允许客户端重新协商TLS会话的某些方面。但是,会话重新协商需要消耗不成比例的服务器端资源,这使其可能成为阻断服务攻击(denial-of-service)的媒介。

To mitigate this, renegotiations are limited to three times every 10 minutes. An error is emitted on the tls.TLSSocket instance when the threshold is exceeded. The limits are configurable:

为了减低这种情况的发生,重新协商被限制在每10分钟三次。如果超过这个限制,tls.TLSSocket实例上就会触发一个错误。这个限制是可设置的:

  • tls.CLIENT_RENEG_LIMIT: renegotiation limit, default is 3.

  • tls.CLIENT_RENEG_LIMIT: 重新协商的次数限制,默认为3。

  • tls.CLIENT_RENEG_WINDOW: renegotiation window in seconds, default is 10 minutes.

  • tls.CLIENT_RENEG_WINDOW: 重新协商窗口的秒数,默认为600(10分钟)。

Don't change the defaults unless you know what you are doing.

除非你完全理解整个机制和清楚自己要干什么,否则不要改变这个默认值。

To test your server, connect to it with openssl s_client -connect address:port and tap R<CR> (that's the letter R followed by a carriage return) a few times.

要测试你的服务器的话,用命令 openssl s_client -connect 地址:端口连接上服务器,然后敲击R<CR>(字母键R加回车键)几次。

NPN 和 SNI#

NPN (Next Protocol Negotiation) and SNI (Server Name Indication) are TLS handshake extensions allowing you:

  • NPN - to use one TLS server for multiple protocols (HTTP, SPDY)
  • SNI - to use one TLS server for multiple hostnames with different SSL certificates.

tls.getCiphers()#

Returns an array with the names of the supported SSL ciphers.

返回一个数组,其中包含了所支持的SSL加密算法(cipher)的名字。

Example:

示例:

var ciphers = tls.getCiphers();
console.log(ciphers); // ['AES128-SHA', 'AES256-SHA', ...]

tls.createServer(options, [secureConnectionListener])#

Creates a new tls.Server. The connectionListener argument is automatically set as a listener for the secureConnection event. The options object has these possibilities:

创建一个新的 tls.ServerconnectionListener 参数会自动设置为 secureConnection 事件的监听器。options 对象可以包含以下属性:

  • pfx: A string or Buffer containing the private key, certificate and CA certs of the server in PFX or PKCS12 format. (Mutually exclusive with the key, cert and ca options.)

  • pfx: 一个字符串或Buffer对象,包含了PFX或PKCS12格式的服务器私钥、证书和CA证书。(与key, certca选项互斥。)

  • key: A string or Buffer containing the private key of the server in PEM format. (Required)

  • key: 一个字符串或Buffer对象,其中包含了PEM格式的服务器私钥。(必需)

  • passphrase: A string of passphrase for the private key or pfx.

  • passphrase: 私钥或pfx密码的字符串。

  • cert: A string or Buffer containing the certificate key of the server in PEM format. (Required)

  • ca: An array of strings or Buffers of trusted certificates. If this is omitted several well known "root" CAs will be used, like VeriSign. These are used to authorize connections.

  • crl : Either a string or list of strings of PEM encoded CRLs (Certificate Revocation List)

  • ciphers: A string describing the ciphers to use or exclude.

    NOTE: Previous revisions of this section suggested AES256-SHA as an acceptable cipher. Unfortunately, AES256-SHA is a CBC cipher and therefore susceptible to BEAST attacks. Do not use it.

  • handshakeTimeout: Abort the connection if the SSL/TLS handshake does not finish in this many milliseconds. The default is 120 seconds.

    A 'clientError' is emitted on the tls.Server object whenever a handshake times out.

  • honorCipherOrder : When choosing a cipher, use the server's preferences instead of the client preferences.

    Although, this option is disabled by default, it is recommended that you use this option in conjunction with the ciphers option to mitigate BEAST attacks.

  • requestCert: If true the server will request a certificate from clients that connect and attempt to verify that certificate. Default: false.

  • rejectUnauthorized: If true the server will reject any connection which is not authorized with the list of supplied CAs. This option only has an effect if requestCert is true. Default: false.

  • NPNProtocols: An array or Buffer of possible NPN protocols. (Protocols should be ordered by their priority).

  • SNICallback(servername, cb): A function that will be called if client supports SNI TLS extension. Two argument will be passed to it: servername, and cb. SNICallback should invoke cb(null, ctx), where ctx is a SecureContext instance. (You can use crypto.createCredentials(...).context to get proper SecureContext). If SNICallback wasn't provided - default callback with high-level API will be used (see below).

  • sessionTimeout: An integer specifying the seconds after which TLS session identifiers and TLS session tickets created by the server are timed out. See SSL_CTX_set_timeout for more details.

  • sessionIdContext: A string containing an opaque identifier for session resumption. If requestCert is true, the default is MD5 hash value generated from command-line. Otherwise, the default is not provided.

  • secureProtocol: The SSL method to use, e.g. SSLv3_method to force SSL version 3. The possible values depend on your installation of OpenSSL and are defined in the constant SSL_METHODS.

Here is a simple example echo server:

这是一个简单的应答服务器例子:

var options = {
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),

  // 仅在使用客户端证书认证时才需要
  requestCert: true,

  // 仅在客户端使用自签名证书时才需要
  ca: [ fs.readFileSync('client-cert.pem') ]
};

var server = tls.createServer(options, function(socket) {
  console.log('服务器已连接',
              socket.authorized ? '已授权' : '未授权');
  socket.write("欢迎!\n");
  socket.setEncoding('utf8');
  socket.pipe(socket);
});
server.listen(8000, function() {
  console.log('server bound');
});

Or

或者

var options = {
  pfx: fs.readFileSync('server.pfx')
};

var server = tls.createServer(options, function(socket) {
  console.log('server connected',
              socket.authorized ? 'authorized' : 'unauthorized');
  socket.write("welcome!\n");
  socket.setEncoding('utf8');
  socket.pipe(socket);
});
server.listen(8000, function() {
  console.log('server bound');
});

var server = tls.createServer(options, function(socket) {
  console.log('服务器已连接',
              socket.authorized ? '已授权' : '未授权');
  socket.write("欢迎!\n");
  socket.setEncoding('utf8');
  socket.pipe(socket);
});
server.listen(8000, function() {
  console.log('服务器已绑定');
});

You can test this server by connecting to it with openssl s_client:

您可以使用 openssl s_client 连接这个服务器来测试:

openssl s_client -connect 127.0.0.1:8000

tls.connect(options, [callback])#

tls.connect(port, [host], [options], [callback])#

Creates a new client connection to the given port and host (old API) or options.port and options.host. (If host is omitted, it defaults to localhost.) options should be an object which specifies:

  • host: Host the client should connect to

  • port: Port the client should connect to

  • socket: Establish secure connection on a given socket rather than creating a new socket. If this option is specified, host and port are ignored.

  • pfx: A string or Buffer containing the private key, certificate and CA certs of the server in PFX or PKCS12 format.

  • key: A string or Buffer containing the private key of the client in PEM format.

  • passphrase: A string of passphrase for the private key or pfx.

  • passphrase: 私钥或pfx密码的字符串。

  • cert: A string or Buffer containing the certificate key of the client in PEM format.

  • ca: An array of strings or Buffers of trusted certificates. If this is omitted several well known "root" CAs will be used, like VeriSign. These are used to authorize connections.

  • rejectUnauthorized: If true, the server certificate is verified against the list of supplied CAs. An 'error' event is emitted if verification fails. Default: true.

  • NPNProtocols: An array of string or Buffer containing supported NPN protocols. Buffer should have following format: 0x05hello0x05world, where first byte is next protocol name's length. (Passing array should usually be much simpler: ['hello', 'world'].)

  • servername: Servername for SNI (Server Name Indication) TLS extension.

  • secureProtocol: The SSL method to use, e.g. SSLv3_method to force SSL version 3. The possible values depend on your installation of OpenSSL and are defined in the constant SSL_METHODS.

The callback parameter will be added as a listener for the 'secureConnect' event.

callback参数会被作为监听器添加到'secureConnect'事件。

tls.connect() returns a tls.TLSSocket object.

tls.connect()返回一个tls.TLSSocket对象。

Here is an example of a client of echo server as described previously:

下面是一个上述应答服务器的客户端的例子:

var options = {
  // 仅在使用客户端证书认证时才需要
  key: fs.readFileSync('client-key.pem'),
  cert: fs.readFileSync('client-cert.pem'),

  // 仅在服务器使用自签名证书时才需要
  ca: [ fs.readFileSync('server-cert.pem') ]
};

var socket = tls.connect(8000, options, function() {
  console.log('client connected',
              socket.authorized ? 'authorized' : 'unauthorized');
  process.stdin.pipe(socket);
  process.stdin.resume();
});
socket.setEncoding('utf8');
socket.on('data', function(data) {
  console.log(data);
});
socket.on('end', function() {
  server.close();
});

Or

或者

var options = {
  pfx: fs.readFileSync('client.pfx')
};

var socket = tls.connect(8000, options, function() {
  console.log('client connected',
              socket.authorized ? 'authorized' : 'unauthorized');
  process.stdin.pipe(socket);
  process.stdin.resume();
});
socket.setEncoding('utf8');
socket.on('data', function(data) {
  console.log(data);
});
socket.on('end', function() {
  server.close();
});

类: tls.TLSSocket#

Wrapper for instance of net.Socket, replaces internal socket read/write routines to perform transparent encryption/decryption of incoming/outgoing data.

new tls.TLSSocket(socket, options)#

Construct a new TLSSocket object from existing TCP socket.

socket is an instance of net.Socket

socket是一个net.Socket实例。

options is an object that might contain following properties:

options是一个可能包含以下属性的对象:

tls.createSecurePair([credentials], [isServer], [requestCert], [rejectUnauthorized])#

稳定性: 0 - 已废弃。使用 tls.TLSSocket 替代。

Creates a new secure pair object with two streams, one of which reads/writes encrypted data, and one reads/writes cleartext data. Generally the encrypted one is piped to/from an incoming encrypted data stream, and the cleartext one is used as a replacement for the initial encrypted stream.

  • credentials: A credentials object from crypto.createCredentials( ... )

  • isServer: A boolean indicating whether this tls connection should be opened as a server or a client.

  • requestCert: A boolean indicating whether a server should request a certificate from a connecting client. Only applies to server connections.

  • rejectUnauthorized: A boolean indicating whether a server should automatically reject clients with invalid certificates. Only applies to servers with requestCert enabled.

tls.createSecurePair() returns a SecurePair object with cleartext and encrypted stream properties.

NOTE: cleartext has the same APIs as tls.TLSSocket

类: SecurePair#

Returned by tls.createSecurePair.

由tls.createSecurePair返回。

事件: 'secure'#

The event is emitted from the SecurePair once the pair has successfully established a secure connection.

Similarly to the checking for the server 'secureConnection' event, pair.cleartext.authorized should be checked to confirm whether the certificate used properly authorized.

类: tls.Server#

This class is a subclass of net.Server and has the same methods on it. Instead of accepting just raw TCP connections, this accepts encrypted connections using TLS or SSL.

事件: 'secureConnection'#

function (tlsSocket) {}


This event is emitted after a new connection has been successfully handshaked. The argument is an instance of tls.TLSSocket. It has all the common stream methods and events.

socket.authorized is a boolean value which indicates if the client has been verified by one of the supplied certificate authorities for the server. If socket.authorized is false, then socket.authorizationError is set to describe how authorization failed. Implied but worth mentioning: depending on the settings of the TLS server, unauthorized connections may still be accepted. socket.npnProtocol is a string containing the selected NPN protocol. socket.servername is a string containing the servername requested with SNI.

Event: 'clientError'#

function (exception, tlsSocket) { }


When a client connection emits an 'error' event before secure connection is established - it will be forwarded here.

tlsSocket is the tls.TLSSocket that the error originated from.

事件: 'newSession'#

function (sessionId, sessionData) { }


Emitted on creation of TLS session. May be used to store sessions in external storage.

NOTE: adding this event listener will have an effect only on connections established after addition of event listener.

事件: 'resumeSession'#

function (sessionId, callback) { }


Emitted when client wants to resume previous TLS session. Event listener may perform lookup in external storage using given sessionId, and invoke callback(null, sessionData) once finished. If session can't be resumed (i.e. doesn't exist in storage) one may call callback(null, null). Calling callback(err) will terminate incoming connection and destroy socket.

NOTE: adding this event listener will have an effect only on connections established after addition of event listener.

server.listen(port, [host], [callback])#

Begin accepting connections on the specified port and host. If the host is omitted, the server will accept connections directed to any IPv4 address (INADDR_ANY).

This function is asynchronous. The last parameter callback will be called when the server has been bound.

See net.Server for more information.

更多信息见net.Server

server.close()#

Stops the server from accepting new connections. This function is asynchronous, the server is finally closed when the server emits a 'close' event.

server.address()#

Returns the bound address, the address family name and port of the server as reported by the operating system. See net.Server.address() for more information.

server.addContext(hostname, credentials)#

Add secure context that will be used if client request's SNI hostname is matching passed hostname (wildcards can be used). credentials can contain key, cert and ca.

server.maxConnections#

Set this property to reject connections when the server's connection count gets high.

server.connections#

The number of concurrent connections on the server.

服务器的并发连接数.

类: CryptoStream#

稳定性: 0 - 已废弃。使用 tls.TLSSocket 替代。

This is an encrypted stream.

这是一个被加密的流。

cryptoStream.bytesWritten#

A proxy to the underlying socket's bytesWritten accessor, this will return the total bytes written to the socket, including the TLS overhead.

类: tls.TLSSocket#

This is a wrapped version of net.Socket that does transparent encryption of written data and all required TLS negotiation.

This instance implements a duplex Stream interface. It has all the common stream methods and events.

事件: 'secureConnect'#

This event is emitted after a new connection has been successfully handshaked. The listener will be called no matter if the server's certificate was authorized or not. It is up to the user to test tlsSocket.authorized to see if the server certificate was signed by one of the specified CAs. If tlsSocket.authorized === false then the error can be found in tlsSocket.authorizationError. Also if NPN was used - you can check tlsSocket.npnProtocol for negotiated protocol.

tlsSocket.authorized#

A boolean that is true if the peer certificate was signed by one of the specified CAs, otherwise false

tlsSocket.authorizationError#

The reason why the peer's certificate has not been verified. This property becomes available only when tlsSocket.authorized === false.

tlsSocket.getPeerCertificate()#

Returns an object representing the peer's certificate. The returned object has some properties corresponding to the field of the certificate.

Example:

示例:

{ subject: 
   { C: 'UK',
     ST: 'Acknack Ltd',
     L: 'Rhys Jones',
     O: 'node.js',
     OU: 'Test TLS Certificate',
     CN: 'localhost' },
  issuer: 
   { C: 'UK',
     ST: 'Acknack Ltd',
     L: 'Rhys Jones',
     O: 'node.js',
     OU: 'Test TLS Certificate',
     CN: 'localhost' },
  valid_from: 'Nov 11 09:52:22 2009 GMT',
  valid_to: 'Nov  6 09:52:22 2029 GMT',
  fingerprint: '2A:7A:C2:DD:E5:F9:CC:53:72:35:99:7A:02:5A:71:38:52:EC:8A:DF' }

If the peer does not provide a certificate, it returns null or an empty object.

tlsSocket.getCipher()#

Returns an object representing the cipher name and the SSL/TLS protocol version of the current connection.

Example: { name: 'AES256-SHA', version: 'TLSv1/SSLv3' }


See SSL_CIPHER_get_name() and SSL_CIPHER_get_version() in http://www.openssl.org/docs/ssl/ssl.html#DEALING_WITH_CIPHERS for more information.

tlsSocket.renegotiate(options, callback)#

Initiate TLS renegotiation process. The options may contain the following fields: rejectUnauthorized, requestCert (See tls.createServer for details). callback(err) will be executed with null as err, once the renegotiation is successfully completed.

NOTE: Can be used to request peer's certificate after the secure connection has been established.

ANOTHER NOTE: When running as the server, socket will be destroyed with an error after handshakeTimeout timeout.

tlsSocket.address()#

Returns the bound address, the address family name and port of the underlying socket as reported by the operating system. Returns an object with three properties, e.g. { port: 12346, family: 'IPv4', address: '127.0.0.1' }

tlsSocket.remoteAddress#

The string representation of the remote IP address. For example, '74.125.127.100' or '2001:4860:a005::68'.

远程IP地址的字符串表示。例如,'74.125.127.100''2001:4860:a005::68'

tlsSocket.remotePort#

The numeric representation of the remote port. For example, 443.

远程端口的数值表示。例如, 443

tlsSocket.localAddress#

The string representation of the local IP address.

本地IP地址的字符串表达。

tlsSocket.localPort#

The numeric representation of the local port.

本地端口的数值表示。

字符串解码器#

稳定度: 3 - 稳定

To use this module, do require('string_decoder'). StringDecoder decodes a buffer to a string. It is a simple interface to buffer.toString() but provides additional support for utf8.

通过 require('string_decoder') 使用这个模块。StringDecoder 将一个 Buffer 解码成一个字符串。它是 buffer.toString() 的一个简单接口,但提供对 utf8 的额外支持。

var StringDecoder = require('string_decoder').StringDecoder;
var decoder = new StringDecoder('utf8');

var cent = new Buffer([0xC2, 0xA2]);
console.log(decoder.write(cent));

var euro = new Buffer([0xE2, 0x82, 0xAC]);
console.log(decoder.write(euro));

类: StringDecoder#

Accepts a single argument, encoding which defaults to utf8.

接受单个参数 encoding,默认是 utf8

decoder.write(buffer)#

Returns a decoded string.

返回解码后的字符串。

decoder.end()#

Returns any trailing bytes that were left in the buffer.

返回 Buffer 中剩下的末尾字节。

File System#

稳定度: 3 - 稳定

File I/O is provided by simple wrappers around standard POSIX functions. To use this module do require('fs'). All the methods have asynchronous and synchronous forms.

文件系统模块是一个简单包装的标准 POSIX 文件 I/O 操作方法集。您可以通过调用require('fs')来获取该模块。文件系统模块中的所有方法均有异步和同步版本。

The asynchronous form always take a completion callback as its last argument. The arguments passed to the completion callback depend on the method, but the first argument is always reserved for an exception. If the operation was completed successfully, then the first argument will be null or undefined.

文件系统模块中的异步方法需要一个完成时的回调函数作为最后一个传入形参。 回调函数的构成由您调用的异步方法所决定,通常情况下回调函数的第一个形参为返回的错误信息。 如果异步操作执行正确并返回,该错误形参则为null或者undefined

When using the synchronous form any exceptions are immediately thrown. You can use try/catch to handle exceptions or allow them to bubble up.

如果您使用的是同步版本的操作方法,则一旦出现错误,会立即以抛出异常的形式返回错误。你可以用try/catch语句来拦截异常,或者让异常向上冒泡。

Here is an example of the asynchronous version:

这里是一个异步版本的例子:

fs.unlink('/tmp/hello', function (err) {
  if (err) throw err;
  console.log('successfully deleted /tmp/hello');
});

Here is the synchronous version:

这是同步版本的例子:

fs.unlinkSync('/tmp/hello')
console.log('successfully deleted /tmp/hello');

With the asynchronous methods there is no guaranteed ordering. So the following is prone to error:

当使用异步版本时不能保证执行顺序,因此下面这个例子很容易出错:

fs.rename('/tmp/hello', '/tmp/world', function (err) {
  if (err) throw err;
  console.log('renamed complete');
});
fs.stat('/tmp/world', function (err, stats) {
  if (err) throw err;
  console.log('stats: ' + JSON.stringify(stats));
});

It could be that fs.stat is executed before fs.rename. The correct way to do this is to chain the callbacks.

fs.stat 有可能在 fs.rename 之前执行。正确的做法是把回调链接起来:

fs.rename('/tmp/hello', '/tmp/world', function (err) {
  if (err) throw err;
  fs.stat('/tmp/world', function (err, stats) {
    if (err) throw err;
    console.log('stats: ' + JSON.stringify(stats));
  });
});

In busy processes, the programmer is strongly encouraged to use the asynchronous versions of these calls. The synchronous versions will block the entire process until they complete--halting all connections.

在繁忙的进程中,强烈建议程序员使用这些调用的异步版本。同步版本会阻塞整个进程直至操作完成,也就是会使所有连接挂起。

Relative path to filename can be used, remember however that this path will be relative to process.cwd().

文件名可以使用相对路径,但请记住,这个路径是相对于 process.cwd() 解析的。

Most fs functions let you omit the callback argument. If you do, a default callback is used that rethrows errors. To get a trace to the original call site, set the NODE_DEBUG environment variable:

大部分 fs 函数允许省略回调(callback)参数。如果省略,将使用一个默认回调来重新抛出(rethrow)错误。要获得原始调用点的堆栈跟踪(trace)信息,请设置 NODE_DEBUG 环境变量:

$ env NODE_DEBUG=fs node script.js
fs.js:66
        throw err;
              ^
Error: EISDIR, read
    at rethrow (fs.js:61:21)
    at maybeCallback (fs.js:79:42)
    at Object.fs.readFile (fs.js:153:18)
    at bad (/path/to/script.js:2:17)
    at Object.<anonymous> (/path/to/script.js:5:1)
    <etc.>

fs.rename(oldPath, newPath, callback)#

Asynchronous rename(2). No arguments other than a possible exception are given to the completion callback.

异步版本的 rename(2)。完成时的回调函数(callback)只接受一个参数:可能出现的异常信息。

fs.renameSync(oldPath, newPath)#

Synchronous rename(2).

同步版本的rename(2).

fs.ftruncate(fd, len, callback)#

Asynchronous ftruncate(2). No arguments other than a possible exception are given to the completion callback.

异步版本的ftruncate(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.ftruncateSync(fd, len)#

Synchronous ftruncate(2).

同步版本的ftruncate(2).

fs.truncate(path, len, callback)#

Asynchronous truncate(2). No arguments other than a possible exception are given to the completion callback.

异步版本的truncate(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.truncateSync(path, len)#

Synchronous truncate(2).

同步版本的truncate(2).

fs.chown(path, uid, gid, callback)#

Asynchronous chown(2). No arguments other than a possible exception are given to the completion callback.

异步版本的chown(2).完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.chownSync(path, uid, gid)#

Synchronous chown(2).

同步版本的chown(2).

fs.fchown(fd, uid, gid, callback)#

Asynchronous fchown(2). No arguments other than a possible exception are given to the completion callback.

异步版本的fchown(2)。回调函数的参数除了出现错误时有一个错误对象外,没有其它参数。

fs.fchownSync(fd, uid, gid)#

Synchronous fchown(2).

同步版本的fchown(2).

fs.lchown(path, uid, gid, callback)#

Asynchronous lchown(2). No arguments other than a possible exception are given to the completion callback.

异步版的lchown(2)。完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.lchownSync(path, uid, gid)#

Synchronous lchown(2).

同步版本的lchown(2).

fs.chmod(path, mode, callback)#

Asynchronous chmod(2). No arguments other than a possible exception are given to the completion callback.

异步版的 chmod(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.chmodSync(path, mode)#

Synchronous chmod(2).

同步版的 chmod(2).

fs.fchmod(fd, mode, callback)#

Asynchronous fchmod(2). No arguments other than a possible exception are given to the completion callback.

异步版的 fchmod(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.fchmodSync(fd, mode)#

Synchronous fchmod(2).

同步版的 fchmod(2).

fs.lchmod(path, mode, callback)#

Asynchronous lchmod(2). No arguments other than a possible exception are given to the completion callback.

异步版的 lchmod(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

Only available on Mac OS X.

仅在 Mac OS X 系统下可用。

fs.lchmodSync(path, mode)#

Synchronous lchmod(2).

同步版的 lchmod(2).

fs.stat(path, callback)#

Asynchronous stat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats object. See the fs.Stats section below for more information.

异步版的 stat(2). 回调函数(callback) 接收两个参数: (err, stats) ,其中 stats 是一个 fs.Stats 对象。 详情请参考 fs.Stats

fs.lstat(path, callback)#

Asynchronous lstat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats object. lstat() is identical to stat(), except that if path is a symbolic link, then the link itself is stat-ed, not the file that it refers to.

异步版的 lstat(2). 回调函数(callback)接收两个参数: (err, stats) 其中 stats 是一个 fs.Stats 对象。 lstat()stat() 相同,区别在于: 若 path 是一个符号链接时(symbolic link),读取的是该符号链接本身,而不是它所 链接到的文件。

fs.fstat(fd, callback)#

Asynchronous fstat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats object. fstat() is identical to stat(), except that the file to be stat-ed is specified by the file descriptor fd.

异步版的 fstat(2). 回调函数(callback)接收两个参数: (err, stats) 其中 stats 是一个 fs.Stats 对象。 fstat()stat() 相同,区别在于: 要读取的文件(译者注:即第一个参数)是一个文件描述符(file descriptor) fd

fs.statSync(path)#

Synchronous stat(2). Returns an instance of fs.Stats.

同步版的 stat(2). 返回一个 fs.Stats 实例。

fs.lstatSync(path)#

Synchronous lstat(2). Returns an instance of fs.Stats.

同步版的 lstat(2). 返回一个 fs.Stats 实例。

fs.fstatSync(fd)#

Synchronous fstat(2). Returns an instance of fs.Stats.

同步版的 fstat(2). 返回一个 fs.Stats 实例。

fs.link(srcpath, dstpath, callback)#

Asynchronous link(2). No arguments other than a possible exception are given to the completion callback.

异步版的 link(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息。

fs.linkSync(srcpath, dstpath)#

Synchronous link(2).

同步版的 link(2).

fs.symlink(srcpath, dstpath, [type], callback)#

Asynchronous symlink(2). No arguments other than a possible exception are given to the completion callback. type argument can be either 'dir', 'file', or 'junction' (default is 'file'). It is only used on Windows (ignored on other platforms). Note that Windows junction points require the destination path to be absolute. When using 'junction', the destination argument will automatically be normalized to absolute path.

异步版的 symlink(2)。完成时的回调函数(callback)只接受一个参数:可能出现的异常信息。type 参数可以是 'dir'、'file' 或 'junction'(默认为 'file'),且仅在 Windows 上有效(其他平台会被忽略)。注意:Windows 的 junction 点要求目标路径必须是绝对路径;使用 'junction' 时,destination 参数会被自动规范化为绝对路径。

fs.symlinkSync(srcpath, dstpath, [type])#

Synchronous symlink(2).

同步版的 symlink(2).

fs.readlink(path, callback)#

Asynchronous readlink(2). The callback gets two arguments (err, linkString).

异步版的 readlink(2). 回调函数(callback)接收两个参数: (err, linkString).

fs.readlinkSync(path)#

Synchronous readlink(2). Returns the symbolic link's string value.

同步版的 readlink(2). 返回符号链接(symbolic link)的字符串值。

fs.realpath(path, [cache], callback)#

Asynchronous realpath(2). The callback gets two arguments (err, resolvedPath). May use process.cwd to resolve relative paths. cache is an object literal of mapped paths that can be used to force a specific path resolution or avoid additional fs.stat calls for known real paths.

异步版的 realpath(2)。回调函数(callback)接收两个参数:(err, resolvedPath)。可能会使用 process.cwd 来解析相对路径。cache 是一个路径映射的对象字面量,可用于强制指定特定的路径解析结果,或避免对已知的真实路径进行额外的 fs.stat 调用。

Example:

示例:

var cache = {'/etc':'/private/etc'};
fs.realpath('/etc/passwd', cache, function (err, resolvedPath) {
  if (err) throw err;
  console.log(resolvedPath);
});

fs.realpathSync(path, [cache])#

Synchronous realpath(2). Returns the resolved path.

realpath(2) 的同步版本。返回解析出的路径。

fs.unlink(path, callback)#

Asynchronous unlink(2). No arguments other than a possible exception are given to the completion callback.

异步版的 unlink(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.unlinkSync(path)#

Synchronous unlink(2).

同步版的 unlink(2).

fs.rmdir(path, callback)#

Asynchronous rmdir(2). No arguments other than a possible exception are given to the completion callback.

异步版的 rmdir(2)。完成时的回调函数(callback)只接受一个参数:可能出现的异常信息。

fs.rmdirSync(path)#

Synchronous rmdir(2).

同步版的 rmdir(2).

fs.mkdir(path, [mode], callback)#

Asynchronous mkdir(2). No arguments other than a possible exception are given to the completion callback. mode defaults to 0777.

异步版的 mkdir(2)。完成时的回调函数(callback)只接受一个参数:可能出现的异常信息。mode 默认为 0777。

fs.mkdirSync(path, [mode])#

Synchronous mkdir(2).

同步版的 mkdir(2)。

fs.readdir(path, callback)#

Asynchronous readdir(3). Reads the contents of a directory. The callback gets two arguments (err, files) where files is an array of the names of the files in the directory excluding '.' and '..'.

异步版的 readdir(3)。 读取 path 路径所在目录的内容。 回调函数 (callback) 接受两个参数 (err, files) 其中 files 是一个存储目录中所包含的文件名称的数组,数组中不包括 '.''..'

fs.readdirSync(path)#

Synchronous readdir(3). Returns an array of filenames excluding '.' and '..'.

同步版的 readdir(3). 返回文件名数组,其中不包括 '.''..' 目录.

fs.close(fd, callback)#

Asynchronous close(2). No arguments other than a possible exception are given to the completion callback.

异步版 close(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.closeSync(fd)#

Synchronous close(2).

同步版的 close(2).

fs.open(path, flags, [mode], callback)#

Asynchronous file open. See open(2). flags can be:

异步版的文件打开. 详见 open(2). flags 可以是:

  • 'r' - Open file for reading. An exception occurs if the file does not exist.

  • 'r' - 以【只读】的方式打开文件. 当文件不存在时产生异常.

  • 'r+' - Open file for reading and writing. An exception occurs if the file does not exist.

  • 'r+' - 以【读写】的方式打开文件. 当文件不存在时产生异常.

  • 'rs' - Open file for reading in synchronous mode. Instructs the operating system to bypass the local file system cache.

  • 'rs' - 以同步模式打开文件用于读取,并指示操作系统绕过本地文件系统缓存。

    This is primarily useful for opening files on NFS mounts as it allows you to skip the potentially stale local cache. It has a very real impact on I/O performance so don't use this flag unless you need it.

该标志主要用于打开 NFS 挂载上的文件,因为它允许你跳过可能已经过时(stale)的本地缓存。它对 I/O 性能有非常实际的影响,因此除非确实需要,否则请不要使用该标志。

Note that this doesn't turn fs.open() into a synchronous blocking call. If that's what you want then you should be using fs.openSync()

注意: 这并不意味着 fs.open() 变成了一个同步阻塞的请求. 如果你想要一个同步阻塞的请求你应该使用 fs.openSync().

  • 'rs+' - Open file for reading and writing, telling the OS to open it synchronously. See notes for 'rs' about using this with caution.

  • 'rs+' - 同步模式下, 以【读写】的方式打开文件. 请谨慎使用该方式, 详细请查看 'rs' 的注释.

  • 'w' - Open file for writing. The file is created (if it does not exist) or truncated (if it exists).

  • 'w' - 以【只写】的方式打开文件。文件不存在则创建,存在则截断(清空)。

  • 'wx' - Like 'w' but fails if path exists.

  • 'wx' - 类似 'w' 区别是如果文件存在则操作会失败.

  • 'w+' - Open file for reading and writing. The file is created (if it does not exist) or truncated (if it exists).

  • 'w+' - 以【读写】的方式打开文件。文件不存在则创建,存在则截断(清空)。

  • 'wx+' - Like 'w+' but fails if path exists.

  • 'wx+' - 类似 'w+' 区别是如果文件存在则操作会失败.

  • 'a' - Open file for appending. The file is created if it does not exist.

  • 'a' - 以【附加】的形式打开文件,即新写入的数据会附加在原来的文件内容之后. 如果文件不存在则会默认创建.

  • 'ax' - Like 'a' but fails if path exists.

  • 'ax' - 类似 'a' 区别是如果文件存在则操作会失败.

  • 'a+' - Open file for reading and appending. The file is created if it does not exist.

  • 'a+' - 以【读取】和【附加】的形式打开文件. 如果文件不存在则会默认创建.

  • 'ax+' - Like 'a+' but fails if path exists.

  • 'ax+' - 类似 'a+' 区别是如果文件存在则操作会失败.

mode sets the file mode (permission and sticky bits), but only if the file was created. It defaults to 0666, readable and writeable.

参数 mode 用于设置文件模式(权限和 sticky 位),但仅在文件被新创建时生效。默认值为 0666,可读且可写。

The callback gets two arguments (err, fd).

该 callback 接收两个参数 (err, fd).

The exclusive flag 'x' (O_EXCL flag in open(2)) ensures that path is newly created. On POSIX systems, path is considered to exist even if it is a symlink to a non-existent file. The exclusive flag may or may not work with network file systems.

排除 (exclusive) 标识 'x' (对应 open(2) 的 O_EXCL 标识) 保证 path 是一个新建的文件。 POSIX 操作系统上,即使 path 是一个指向不存在位置的符号链接,也会被认定为文件存在。 排除标识在网络文件系统不能确定是否有效。

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.

在 Linux 上,无法对以追加 (append) 模式打开的文件进行指定位置的写入操作。 内核会忽略位置参数并且总是将数据追加到文件尾部。

fs.openSync(path, flags, [mode])#

Synchronous version of fs.open().

fs.open() 的同步版.

fs.utimes(path, atime, mtime, callback)#

fs.utimesSync(path, atime, mtime)#


Change file timestamps of the file referenced by the supplied path.

更改 path 所指向的文件的时间戳。

fs.futimes(fd, atime, mtime, callback)#

fs.futimesSync(fd, atime, mtime)#


Change the file timestamps of a file referenced by the supplied file descriptor.

更改文件描述符(file descriptor)所指向的文件的时间戳。

fs.fsync(fd, callback)#

Asynchronous fsync(2). No arguments other than a possible exception are given to the completion callback.

异步版本的 fsync(2)。回调函数仅含有一个异常 (exception) 参数。

fs.fsyncSync(fd)#

Synchronous fsync(2).

fsync(2) 的同步版本。

fs.write(fd, buffer, offset, length[, position], callback)#

Write buffer to the file specified by fd.

通过文件描述符 fd 向指定的文件写入 buffer。

offset and length determine the part of the buffer to be written.

offsetlength 可以确定从哪个位置开始写入buffer。

position refers to the offset from the beginning of the file where this data should be written. If typeof position !== 'number', the data will be written at the current position. See pwrite(2).

position 指明数据写入位置相对于文件开头的偏移量。如果 typeof position !== 'number',数据会被写入当前位置。参见 pwrite(2)。

The callback will be given three arguments (err, written, buffer) where written specifies how many bytes were written from buffer.

回调中会给出三个参数 (err, written, buffer)written 说明从buffer写入的字节数。

Note that it is unsafe to use fs.write multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream is strongly recommended.

注意,不等待回调就对同一个文件多次调用 fs.write 是不安全的。这种情况下强烈推荐使用 fs.createWriteStream

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.

在 Linux 上,无法对以追加 (append) 模式打开的文件进行指定位置的写入操作。 内核会忽略位置参数并且总是将数据追加到文件尾部。

fs.write(fd, data[, position[, encoding]], callback)#

Write data to the file specified by fd. If data is not a Buffer instance then the value will be coerced to a string.

data写入到文档中通过指定的fd,如果data不是buffer对象的实例则会把值强制转化成一个字符串。

position refers to the offset from the beginning of the file where this data should be written. If typeof position !== 'number' the data will be written at the current position. See pwrite(2).

position 指明数据写入位置相对于文件开头的偏移量。如果 typeof position !== 'number',数据会被写入当前位置。参见 pwrite(2)。

encoding is the expected string encoding.

encoding 是期望的字符串编码。

The callback will receive the arguments (err, written, string) where written specifies how many bytes the passed string required to be written. Note that bytes written is not the same as string characters. See Buffer.byteLength.

回调会收到参数 (err, written, string),其中 written 指明写入传入的字符串所需的字节数。注意写入的字节数和字符串的字符数并不相同,参见 Buffer.byteLength

Unlike when writing buffer, the entire string must be written. No substring may be specified. This is because the byte offset of the resulting data may not be the same as the string offset.

与写入 buffer 不同,字符串必须被完整写入,不能只指定子串。这是因为结果数据的字节偏移量可能与字符串的字符偏移量不同。

Note that it is unsafe to use fs.write multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream is strongly recommended.

注意,不等待回调就对同一个文件多次调用 fs.write 是不安全的。这种情况下强烈推荐使用 fs.createWriteStream

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.

在 Linux 上,无法对以追加 (append) 模式打开的文件进行指定位置的写入操作。 内核会忽略位置参数并且总是将数据追加到文件尾部。

fs.writeSync(fd, buffer, offset, length[, position])#

fs.writeSync(fd, data[, position[, encoding]])#

Synchronous versions of fs.write(). Returns the number of bytes written.

同步版本的fs.write()。返回写入的字节数。

fs.read(fd, buffer, offset, length, position, callback)#

Read data from the file specified by fd.

fd 指定的文件中读取数据。

buffer is the buffer that the data will be written to.

buffer 是缓冲区,数据将会写入这里。

offset is the offset in the buffer to start writing at.

offset 是开始向缓冲区 buffer 写入的偏移量。

length is an integer specifying the number of bytes to read.

length 是一个整型值,指定了要读取的字节数。

position is an integer specifying where to begin reading from in the file. If position is null, data will be read from the current file position.

position 是一个整型值,指定了从文件的哪个位置开始读取。如果 positionnull,将会从文件当前的位置读取数据。

The callback is given the three arguments, (err, bytesRead, buffer).

回调函数给定了三个参数, (err, bytesRead, buffer), 分别为错误,读取的字节和缓冲区。

fs.readSync(fd, buffer, offset, length, position)#

Synchronous version of fs.read. Returns the number of bytesRead.

fs.read 函数的同步版本。返回读取的字节数(bytesRead)。

fs.readFile(filename, [options], callback)#

  • filename String
  • options Object
    • encoding String | Null default = null
    • flag String default = 'r'
  • callback Function

Asynchronously reads the entire contents of a file. Example:

异步读取一个文件的全部内容。举例:

fs.readFile('/etc/passwd', function (err, data) {
  if (err) throw err;
  console.log(data);
});

The callback is passed two arguments (err, data), where data is the contents of the file.

回调函数传递了两个参数 (err, data), data 就是文件的内容。

If no encoding is specified, then the raw buffer is returned.

如果未指定编码方式,原生buffer就会被返回。

fs.readFileSync(filename, [options])#

Synchronous version of fs.readFile. Returns the contents of the filename.

fs.readFile的同步版本。 返回文件名为 filename 的文件内容。

If the encoding option is specified then this function returns a string. Otherwise it returns a buffer.

如果 encoding 选项被指定, 那么这个函数返回一个字符串。如果未指定,则返回一个原生buffer。

fs.writeFile(filename, data, [options], callback)#

  • filename String
  • data String | Buffer
  • options Object
    • encoding String | Null default = 'utf8'
    • mode Number default = 438 (aka 0666 in Octal)
    • flag String default = 'w'
  • callback Function

Asynchronously writes data to a file, replacing the file if it already exists. data can be a string or a buffer.

异步的将数据写入一个文件, 如果文件原先存在,会被替换。 data 可以是一个string,也可以是一个原生buffer。

The encoding option is ignored if data is a buffer. It defaults to 'utf8'.

如果 data 是一个 buffer,encoding 选项会被忽略。encoding 缺省为 'utf8'

Example:

示例:

fs.writeFile('message.txt', 'Hello Node', function (err) {
  if (err) throw err;
  console.log('It\'s saved!'); //文件被保存
});

fs.writeFileSync(filename, data, [options])#

The synchronous version of fs.writeFile.

fs.writeFile的同步版本。

fs.appendFile(filename, data, [options], callback)#

  • filename String
  • data String | Buffer
  • options Object
    • encoding String | Null default = 'utf8'
    • mode Number default = 438 (aka 0666 in Octal)
    • flag String default = 'a'
  • callback Function

Asynchronously append data to a file, creating the file if it not yet exists. data can be a string or a buffer.

异步的将数据添加到一个文件的尾部,如果文件不存在,会创建一个新的文件。 data 可以是一个string,也可以是原生buffer。

Example:

示例:

fs.appendFile('message.txt', 'data to append', function (err) {
  if (err) throw err;
  console.log('The "data to append" was appended to file!'); //数据被添加到文件的尾部
});

fs.appendFileSync(filename, data, [options])#

The synchronous version of fs.appendFile.

fs.appendFile的同步版本。

fs.watchFile(filename, [options], listener)#

稳定性: 2 - 不稳定.   尽可能的话推荐使用 fs.watch 来代替。

Watch for changes on filename. The callback listener will be called each time the file is accessed.

监视filename指定的文件的改变. 回调函数 listener 会在文件每一次被访问时被调用。

The second argument is optional. The options if provided should be an object containing two members a boolean, persistent, and interval. persistent indicates whether the process should continue to run as long as files are being watched. interval indicates how often the target should be polled, in milliseconds. The default is { persistent: true, interval: 5007 }.

第二个参数是可选的。 如果提供此参数,options 应该是包含两个成员persistentinterval的对象,其中persistent值为boolean类型。persistent指定进程是否应该在文件被监视(watch)时继续运行,interval指定了目标文件被查询的间隔,以毫秒为单位。缺省值为{ persistent: true, interval: 5007 }

The listener gets two arguments the current stat object and the previous stat object:

listener 有两个参数,第一个为文件现在的状态,第二个为文件的前一个状态。

fs.watchFile('message.text', function (curr, prev) {
  console.log('the current mtime is: ' + curr.mtime);
  console.log('the previous mtime was: ' + prev.mtime);
});

These stat objects are instances of fs.Stats.

listener 中的文件状态对象是 fs.Stats 的实例。

If you want to be notified when the file was modified, not just accessed you need to compare curr.mtime and prev.mtime.

如果你想在文件被修改(而不仅仅是被访问)时得到通知,就需要比较 curr.mtimeprev.mtime

fs.unwatchFile(filename, [listener])#

稳定性: 2 - 不稳定.   尽可能的话推荐使用 fs.watch 来代替。

Stop watching for changes on filename. If listener is specified, only that particular listener is removed. Otherwise, all listeners are removed and you have effectively stopped watching filename.

停止监视文件名为 filename的文件. 如果 listener 参数被指定, 会移除在fs.watchFile函数中指定的那一个listener回调函数。 否则, 所有的 回调函数都会被移除,你将彻底停止监视filename文件。

Calling fs.unwatchFile() with a filename that is not being watched is a no-op, not an error.

对一个并未处于监视中的文件名调用 fs.unwatchFile() 是一个空操作(no-op),不会产生错误。

fs.watch(filename, [options], [listener])#

稳定性: 2 - 不稳定的

Watch for changes on filename, where filename is either a file or a directory. The returned object is a fs.FSWatcher.

监视 filename 的变化,filename 可以是一个文件或一个目录。该函数返回的对象是 fs.FSWatcher

The second argument is optional. The options if provided should be an object containing a boolean member persistent, which indicates whether the process should continue to run as long as files are being watched. The default is { persistent: true }.

第二个参数是可选的。如果提供 options,它应当是一个包含布尔类型成员 persistent 的对象。persistent 指定只要文件还在被监视,进程是否就应继续运行。缺省值为 { persistent: true }

The listener callback gets two arguments (event, filename). event is either 'rename' or 'change', and filename is the name of the file which triggered the event.

监听器的回调函数得到两个参数 (event, filename)。其中 event 是 'rename'(重命名)或者 'change'(改变),而 filename 则是触发事件的文件名。

注意事项#

The fs.watch API is not 100% consistent across platforms, and is unavailable in some situations.

fs.watch 不是完全跨平台的,且在某些情况下不可用。

可用性#

This feature depends on the underlying operating system providing a way to be notified of filesystem changes.

此功能依赖于操作系统底层提供的方法来监视文件系统的变化。

  • On Linux systems, this uses inotify.
  • On BSD systems (including OS X), this uses kqueue.
  • On SunOS systems (including Solaris and SmartOS), this uses event ports.
  • On Windows systems, this feature depends on ReadDirectoryChangesW.

  • 在 Linux 操作系统上,使用 inotify

  • 在 BSD 操作系统上 (包括 OS X),使用 kqueue
  • 在 SunOS 操作系统上 (包括 Solaris 和 SmartOS),使用 event ports
  • 在 Windows 操作系统上,该特性依赖于 ReadDirectoryChangesW

If the underlying functionality is not available for some reason, then fs.watch will not be able to function. For example, watching files or directories on network file systems (NFS, SMB, etc.) often doesn't work reliably or at all.

如果系统底层函数出于某些原因不可用,那么 fs.watch 也就无法工作。例如,监视网络文件系统(如 NFS, SMB 等)的文件或者目录,就时常不能稳定的工作,有时甚至完全不起作用。

You can still use fs.watchFile, which uses stat polling, but it is slower and less reliable.

你仍然可以使用基于 stat 轮询的 fs.watchFile,但它更慢,也更不可靠。

文件名参数#

Providing filename argument in the callback is not supported on every platform (currently it's only supported on Linux and Windows). Even on supported platforms filename is not always guaranteed to be provided. Therefore, don't assume that filename argument is always provided in the callback, and have some fallback logic if it is null.

在回调函数中提供 filename 参数并不是每个平台都支持(目前仅 Linux 和 Windows 支持)。即便在支持的平台上,也不保证每次回调都会提供 filename。因此,不要假设回调中总会提供 filename 参数,并为其为 null 的情况准备一些回退逻辑。

fs.watch('somedir', function (event, filename) {
  console.log('event is: ' + event);
  if (filename) {
    console.log('filename provided: ' + filename);
  } else {
    console.log('filename not provided');
  }
});

fs.exists(path, callback)#

Test whether or not the given path exists by checking with the file system. Then call the callback argument with either true or false. Example:

检查指定路径的文件或者目录是否存在。接着通过 callback 传入的参数指明存在 (true) 或者不存在 (false)。示例:

fs.exists('/etc/passwd', function (exists) {
  util.debug(exists ? "存在" : "不存在!");
});

fs.existsSync(path)#

Synchronous version of fs.exists.

fs.exists 函数的同步版。

Class: fs.Stats#

Objects returned from fs.stat(), fs.lstat() and fs.fstat() and their synchronous counterparts are of this type.

fs.stat()fs.lstat()fs.fstat() 及其对应的同步版本返回的对象均属于此类型。

  • stats.isFile()
  • stats.isDirectory()
  • stats.isBlockDevice()
  • stats.isCharacterDevice()
  • stats.isSymbolicLink() (only valid with fs.lstat())
  • stats.isFIFO()
  • stats.isSocket()

  • stats.isFile()

  • stats.isDirectory()
  • stats.isBlockDevice()
  • stats.isCharacterDevice()
  • stats.isSymbolicLink() (仅在与 fs.lstat()一起使用时合法)
  • stats.isFIFO()
  • stats.isSocket()

For a regular file util.inspect(stats) would return a string very similar to this:

对于一个普通文件使用 util.inspect(stats) 将会返回一个类似如下输出的字符串:

{ dev: 2114,
  ino: 48064969,
  mode: 33188,
  nlink: 1,
  uid: 85,
  gid: 100,
  rdev: 0,
  size: 527,
  blksize: 4096,
  blocks: 8,
  atime: Mon, 10 Oct 2011 23:24:11 GMT,
  mtime: Mon, 10 Oct 2011 23:24:11 GMT,
  ctime: Mon, 10 Oct 2011 23:24:11 GMT,
  birthtime: Mon, 10 Oct 2011 23:24:11 GMT }

Please note that atime, mtime, birthtime, and ctime are instances of Date object and to compare the values of these objects you should use appropriate methods. For most general uses getTime() will return the number of milliseconds elapsed since 1 January 1970 00:00:00 UTC and this integer should be sufficient for any comparison, however there additional methods which can be used for displaying fuzzy information. More details can be found in the MDN JavaScript Reference page.

请注意 atimemtimebirthtimectime 都是 Date 对象的实例,比较这些对象的值时应当使用适当的方法。通常使用 getTime() 即可,它返回自 1970 年 1 月 1 日 00:00:00 UTC 以来经过的毫秒数,这个整数足以满足任何比较需求。此外还有一些可用于显示模糊时间信息的方法,更多细节请查看 MDN JavaScript Reference 页面。

Stat Time Values#

The times in the stat object have the following semantics:

在状态对象(stat object)中的时间有以下语义:

  • atime "Access Time" - Time when file data last accessed. Changed by the mknod(2), utimes(2), and read(2) system calls.
  • mtime "Modified Time" - Time when file data last modified. Changed by the mknod(2), utimes(2), and write(2) system calls.
  • ctime "Change Time" - Time when file status was last changed (inode data modification). Changed by the chmod(2), chown(2), link(2), mknod(2), rename(2), unlink(2), utimes(2), read(2), and write(2) system calls.
  • birthtime "Birth Time" - Time of file creation. Set once when the file is created. On filesystems where birthtime is not available, this field may instead hold either the ctime or 1970-01-01T00:00Z (ie, unix epoch timestamp 0). On Darwin and other FreeBSD variants, also set if the atime is explicitly set to an earlier value than the current birthtime using the utimes(2) system call.

  • atime "Access Time" - 文件数据上次被访问的时间。会被 mknod(2)、utimes(2) 和 read(2) 等系统调用改变。

  • mtime "Modified Time" - 文件数据上次被修改的时间。会被 mknod(2)、utimes(2) 和 write(2) 等系统调用改变。
  • ctime "Change Time" - 文件状态(inode 数据)上次改变的时间。会被 chmod(2)、chown(2)、link(2)、mknod(2)、rename(2)、unlink(2)、utimes(2)、read(2) 和 write(2) 等系统调用改变。
  • birthtime "Birth Time" - 文件的创建时间,在文件创建时设置一次。在不提供 birthtime 的文件系统中,这个字段可能会被 ctime1970-01-01T00:00Z(即 unix 时间戳 0)填充。在 Darwin 及其他 FreeBSD 变体上,如果使用 utimes(2) 系统调用把 atime 显式设置为比当前 birthtime 更早的值,birthtime 也会被更新。

Prior to Node v0.12, the ctime held the birthtime on Windows systems. Note that as of v0.12, ctime is not "creation time", and on Unix systems, it never was.

在 Node v0.12 之前,Windows 系统上的 ctime 保存的是 birthtime 的值。注意从 v0.12 起,ctime 不再是“创建时间”;而在 Unix 系统上,它从来都不是。

fs.createReadStream(path, [options])#

Returns a new ReadStream object (See Readable Stream).

返回一个新的 ReadStream 对象 (详见 Readable Stream).

options is an object with the following defaults:

options 是一个包含下列缺省值的对象:

{ flags: 'r',
  encoding: null,
  fd: null,
  mode: 0666,
  autoClose: true
}

options can include start and end values to read a range of bytes from the file instead of the entire file. Both start and end are inclusive and start at 0. The encoding can be 'utf8', 'ascii', or 'base64'.

options 可以提供 startend 值用于读取文件内的特定范围而非整个文件。 startend 都是包含在范围内的(inclusive, 可理解为闭区间)并且以 0 开始。 encoding 可选为 'utf8', 'ascii' 或者 'base64'

If autoClose is false, then the file descriptor won't be closed, even if there's an error. It is your responsibility to close it and make sure there's no file descriptor leak. If autoClose is set to true (default behavior), on error or end the file descriptor will be closed automatically.

如果 autoClose 为 false 则即使在发生错误时也不会关闭文件描述符 (file descriptor)。 此时你需要负责关闭文件,避免文件描述符泄露 (leak)。 如果 autoClose 为 true (缺省值), 当发生 error 或者 end 事件时,文件描述符会被自动释放。

An example to read the last 10 bytes of a file which is 100 bytes long:

一个从100字节的文件中读取最后10字节的例子:

fs.createReadStream('sample.txt', {start: 90, end: 99});

Class: fs.ReadStream#

ReadStream is a Readable Stream.

ReadStream 是一个可读的流(Readable Stream).

事件: 'open'#

  • fd Integer file descriptor used by the ReadStream.

  • fd 整型 ReadStream 所使用的文件描述符。

Emitted when the ReadStream's file is opened.

当 ReadStream 的文件被打开时触发。

fs.createWriteStream(path, [options])#

Returns a new WriteStream object (See Writable Stream).

返回一个新的 WriteStream 对象 (详见 Writable Stream).

options is an object with the following defaults:

options 是一个包含下列缺省值的对象:

{ flags: 'w',
  encoding: null,
  mode: 0666 }

options may also include a start option to allow writing data at some position past the beginning of the file. Modifying a file rather than replacing it may require a flags mode of r+ rather than the default mode w.

options 也可以包含一个 start 选项,允许从文件开头之后的某个位置开始写入数据。修改文件(而不是替换它)需要将 flags 模式指定为 r+,而不是默认的 w

Class: fs.WriteStream#

WriteStream is a Writable Stream.

WriteStream 是一个可写的流(Writable Stream).

事件: 'open'#

  • fd Integer file descriptor used by the WriteStream.

  • fd 整型 WriteStream 所使用的文件描述符。

Emitted when the WriteStream's file is opened.

当 WriteStream 的文件被打开时触发。

file.bytesWritten#

The number of bytes written so far. Does not include data that is still queued for writing.

已写的字节数。不包含仍在队列中准备写入的数据。

Class: fs.FSWatcher#

Objects returned from fs.watch() are of this type.

fs.watch() 返回的对象类型。

watcher.close()#

Stop watching for changes on the given fs.FSWatcher.

停止监视给定 fs.FSWatcher 上的变化。

事件: 'change'#

  • event String The type of fs change
  • filename String The filename that changed (if relevant/available)

  • event 字符串 fs 变化的类型

  • filename 字符串 发生变化的文件名(如果相关/可用)

Emitted when something changes in a watched directory or file. See more details in fs.watch.

当正在观察的目录或文件发生变动时触发。更多细节,详见 fs.watch

事件: 'error'#

  • error Error object

  • error Error 对象

Emitted when an error occurs.

当发生错误时触发。

路径 (Path)#

稳定度: 3 - 稳定

This module contains utilities for handling and transforming file paths. Almost all these methods perform only string transformations. The file system is not consulted to check whether paths are valid.

本模块包含一套用于处理和转换文件路径的工具集。几乎所有的方法仅对字符串进行转换, 文件系统是不会检查路径是否真实有效的。

Use require('path') to use this module. The following methods are provided:

通过 require('path') 来加载此模块。以下是本模块所提供的方法:

path.normalize(p)#

Normalize a string path, taking care of '..' and '.' parts.

规范化字符串路径,处理其中的 '..' 和 '.' 部分。

When multiple slashes are found, they're replaced by a single one; when the path contains a trailing slash, it is preserved. On Windows backslashes are used.

当发现有多个连续的斜杠时,会替换成一个; 当路径末尾包含斜杠时,会保留; 在 Windows 系统会使用反斜杠。

Example:

示例:

path.normalize('/foo/bar//baz/asdf/quux/..')
// returns
'/foo/bar/baz/asdf'

path.join([path1], [path2], [...])#

Join all arguments together and normalize the resulting path.

组合参数中的所有路径,返回规范化后的路径。

Arguments must be strings. In v0.8, non-string arguments were silently ignored. In v0.10 and up, an exception is thrown.

参数必须是字符串。在 v0.8 版本非字符串参数会被悄悄忽略。 在 v0.10 及以后版本将会抛出一个异常。

Example:

示例:

path.join('foo', {}, 'bar')
// 抛出异常
TypeError: Arguments to path.join must be strings

path.resolve([from ...], to)#

Resolves to to an absolute path.

to 参数解析为一个绝对路径。

If to isn't already absolute from arguments are prepended in right to left order, until an absolute path is found. If after using all from paths still no absolute path is found, the current working directory is used as well. The resulting path is normalized, and trailing slashes are removed unless the path gets resolved to the root directory. Non-string arguments are ignored.

如果 to 不是绝对路径,则将 from 参数按从右到左的顺序依次前置,直到找到一个绝对路径为止。如果用尽所有 from 路径后仍未找到绝对路径,则再使用当前工作目录。返回的路径会被规范化,并且除非解析结果为根目录,否则结尾的斜杠会被去除。非字符串参数将被忽略。

Another way to think of it is as a sequence of cd commands in a shell.

另一种理解方式是,它就像在 shell 中执行一系列 cd 命令一样。

path.resolve('foo/bar', '/tmp/file/', '..', 'a/../subfile')

Is similar to:

相当于:

cd foo/bar
cd /tmp/file/
cd ..
cd a/../subfile
pwd

The difference is that the different paths don't need to exist and may also be files.

不同的是,这些路径不需要真实存在,并且也可以是文件。

Examples:

示例:

path.resolve('wwwroot', 'static_files/png/', '../gif/image.gif')
// 如果当前工作目录为 /home/myself/node,它返回:
'/home/myself/node/wwwroot/static_files/gif/image.gif'

path.isAbsolute(path)#

Determines whether path is an absolute path. An absolute path will always resolve to the same location, regardless of the working directory.

判定path是否为绝对路径。一个绝对路径总是指向一个相同的位置,无论当前工作目录是在哪里。

Posix examples:

Posix 示例:

path.isAbsolute('/foo/bar') // true
path.isAbsolute('/baz/..')  // true
path.isAbsolute('qux/')     // false
path.isAbsolute('.')        // false

Windows examples:

Windows 示例:

path.isAbsolute('//server')  // true
path.isAbsolute('C:/foo/..') // true
path.isAbsolute('bar\\baz')   // false
path.isAbsolute('.')         // false

path.relative(from, to)#

Solve the relative path from from to to.

求解从 from 到 to 的相对路径。

At times we have two absolute paths, and we need to derive the relative path from one to the other. This is actually the reverse transform of path.resolve, which means we see that:

有时我们有两个绝对路径,需要求出从其中一个到另一个的相对路径。这实际上是 path.resolve 的逆变换,也就是说:

path.resolve(from, path.relative(from, to)) == path.resolve(to)

Examples:

示例:

path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb')
// 返回
'../../impl/bbb'

path.dirname(p)#

Return the directory name of a path. Similar to the Unix dirname command.

返回路径中目录的部分,类似于 Unix 的 dirname 命令。

Example:

示例:

path.dirname('/foo/bar/baz/asdf/quux')
// returns
'/foo/bar/baz/asdf'

path.basename(p, [ext])#

Return the last portion of a path. Similar to the Unix basename command.

返回路径的最后一部分,类似于 Unix 的 basename 命令。

Example:

示例:

path.basename('/foo/bar/baz/asdf/quux.html', '.html')
// returns
'quux'

path.extname(p)#

Return the extension of the path, from the last '.' to end of string in the last portion of the path. If there is no '.' in the last portion of the path or the first character of it is '.', then it returns an empty string. Examples:

返回路径中文件的扩展名,即路径最后一部分中从最后一个 '.' 到字符串末尾的部分。如果路径的最后一部分中没有 '.',或者其第一个字符就是 '.',则返回空字符串。示例:

path.extname('index')
// returns
''

path.sep#

The platform-specific file separator. '\\' or '/'.

特定平台的文件分隔符,'\\' 或 '/'。

An example on *nix:

*nix 上的例子:

'foo/bar/baz'.split(path.sep)
// returns
['foo', 'bar', 'baz']

An example on Windows:

Windows 上的例子:

'foo\\bar\\baz'.split(path.sep)
// returns
['foo', 'bar', 'baz']

path.delimiter#

The platform-specific path delimiter, ; or ':'.

特定平台的路径分隔符, ; 或者 ':'.

An example on *nix:

*nix 上的例子:

process.env.PATH.split(path.delimiter)
// returns
['/usr/bin', '/bin', '/usr/sbin', '/sbin', '/usr/local/bin']

An example on Windows:

Windows 上的例子:

console.log(process.env.PATH)
// 'C:\Windows\system32;C:\Windows;C:\Program Files\nodejs\'

process.env.PATH.split(path.delimiter)
// returns
['C:\Windows\system32', 'C:\Windows', 'C:\Program Files\nodejs\']



网络#

稳定度: 3 - 稳定

The net module provides you with an asynchronous network wrapper. It contains methods for creating both servers and clients (called streams). You can include this module with require('net');

net 模块封装了异步网络功能,提供了一些方法来创建服务器和客户端(称之为流)。您可以用 require('net') 来引入这个模块。

net.createServer([options], [connectionListener])#

Creates a new TCP server. The connectionListener argument is automatically set as a listener for the 'connection' event.

创建一个新的 TCP 服务器。参数 connectionListener 会被自动作为 'connection' 事件的监听器。

options is an object with the following defaults:

options 是一个包含下列缺省值的对象:

{ allowHalfOpen: false
}

If allowHalfOpen is true, then the socket won't automatically send a FIN packet when the other end of the socket sends a FIN packet. The socket becomes non-readable, but still writable. You should call the end() method explicitly. See 'end' event for more information.

如果允许半开连接 allowHalfOpen 被设置为 true,则当另一端的套接字发送 FIN 报文时套接字并不会自动发送 FIN 报文。套接字会变为不可读,但仍然可写。您应当明确地调用 end() 方法。详见 'end' 事件。

Here is an example of an echo server which listens for connections on port 8124:

下面是一个监听 8124 端口连接的应答服务器的例子:

var net = require('net');
var server = net.createServer(function(c) { // 'connection' 监听器
  console.log('服务器已连接');
  c.on('end', function() {
    console.log('服务器已断开');
  });
  c.write('hello\r\n');
  c.pipe(c);
});
server.listen(8124, function() { // 'listening' 监听器
  console.log('服务器已绑定');
});

Test this by using telnet:

使用 telnet 测试:

telnet localhost 8124

To listen on the socket /tmp/echo.sock the third line from the last would just be changed to

要监听套接字 /tmp/echo.sock 仅需更改倒数第三行代码:

server.listen('/tmp/echo.sock', function() { // 'listening' 监听器

Use nc to connect to a UNIX domain socket server:

使用 nc 连接到一个 UNIX domain 套接字服务器:

nc -U /tmp/echo.sock

net.connect(options, [connectionListener])#

net.createConnection(options, [connectionListener])#


Constructs a new socket object and opens the socket to the given location. When the socket is established, the 'connect' event will be emitted.

构建一个新的套接字对象并打开所给位置的套接字。当套接字就绪时会触发 'connect' 事件。

For TCP sockets, options argument should be an object which specifies:

对于 TCP 套接字,选项 options 参数应为一个指定下列参数的对象:

  • port: Port the client should connect to (Required).

  • port:客户端连接到的端口(必须)

  • host: Host the client should connect to. Defaults to 'localhost'.

  • host:客户端连接到的主机,缺省为 'localhost'

  • localAddress: Local interface to bind to for network connections.

  • localAddress:网络连接绑定的本地接口

  • family : Version of IP stack. Defaults to 4.

  • family:IP 栈版本,缺省为 4

For UNIX domain sockets, options argument should be an object which specifies:

对于 UNIX domain 套接字,选项 options 参数应当为一个指定下列参数的对象:

  • path: Path the client should connect to (Required).

  • path:客户端连接到的路径(必须)

Common options are:

通用选项:

  • allowHalfOpen: if true, the socket won't automatically send a FIN packet when the other end of the socket sends a FIN packet. Defaults to false. See 'end' event for more information.

  • allowHalfOpen:允许半开连接,如果被设置为 true,则当另一端的套接字发送 FIN 报文时套接字并不会自动发送 FIN 报文。缺省为 false。详见 'end' 事件。

The connectListener parameter will be added as an listener for the 'connect' event.

connectListener 参数会被添加为 'connect' 事件的监听器。

Here is an example of a client of echo server as described previously:

下面是一个上述应答服务器的客户端的例子:

var net = require('net');
var client = net.connect({port: 8124},
    function() { //'connect' 监听器
  console.log('client connected');
  client.write('world!\r\n');
});
client.on('data', function(data) {
  console.log(data.toString());
  client.end();
});
client.on('end', function() {
  console.log('客户端断开连接');
});

To connect on the socket /tmp/echo.sock the second line would just be changed to

要连接到套接字 /tmp/echo.sock,仅需将第二行改为

var client = net.connect({path: '/tmp/echo.sock'},

net.connect(port, [host], [connectListener])#

net.createConnection(port, [host], [connectListener])#


Creates a TCP connection to port on host. If host is omitted, 'localhost' will be assumed. The connectListener parameter will be added as an listener for the 'connect' event.

创建一个 host 主机 port 端口的 TCP 连接。如果省略 host 则假定为 'localhost'connectListener 参数会被用作 'connect' 事件的监听器。

net.connect(path, [connectListener])#

net.createConnection(path, [connectListener])#


Creates unix socket connection to path. The connectListener parameter will be added as an listener for the 'connect' event.

创建一个到路径 path 的 UNIX 套接字连接。connectListener 参数会被用作 'connect' 事件的监听器。

类: net.Server#

This class is used to create a TCP or UNIX server. A server is a net.Socket that can listen for new incoming connections.

该类用于创建一个 TCP 或 UNIX 服务器。服务器本质上是一个可监听传入连接的 net.Socket

server.listen(port, [host], [backlog], [callback])#

Begin accepting connections on the specified port and host. If the host is omitted, the server will accept connections directed to any IPv4 address (INADDR_ANY). A port value of zero will assign a random port.

在指定端口 port 和主机 host 上开始接受连接。如果省略 host,则服务器会接受指向任意 IPv4 地址(INADDR_ANY)的连接;端口为 0 则会分配一个随机端口。

Backlog is the maximum length of the queue of pending connections. The actual length will be determined by your OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on linux. The default value of this parameter is 511 (not 512).

积压量 backlog 为连接等待队列的最大长度。实际长度由您的操作系统通过 sysctl 设置决定,比如 Linux 上的 tcp_max_syn_backlogsomaxconn。该参数缺省值为 511(不是 512)。

This function is asynchronous. When the server has been bound, 'listening' event will be emitted. The last parameter callback will be added as an listener for the 'listening' event.

这是一个异步函数。当服务器已被绑定时会触发 'listening' 事件。最后一个参数 callback 会被用作 'listening' 事件的监听器。

One issue some users run into is getting EADDRINUSE errors. This means that another server is already running on the requested port. One way of handling this would be to wait a second and then try again. This can be done with

有些用户会遇到 EADDRINUSE 错误,这表示另一个服务器已经运行在所请求的端口上。一种处理方法是等待一秒后重试,可以这样实现:

server.on('error', function (e) {
  if (e.code == 'EADDRINUSE') {
    console.log('地址被占用,重试中...');
    setTimeout(function () {
      server.close();
      server.listen(PORT, HOST);
    }, 1000);
  }
});

(Note: All sockets in Node set SO_REUSEADDR already)

(注意:Node 中的所有套接字已设置了 SO_REUSEADDR

server.listen(path, [callback])#

Start a UNIX socket server listening for connections on the given path.

启动一个 UNIX 套接字服务器在所给路径 path 上监听连接。

This function is asynchronous. When the server has been bound, 'listening' event will be emitted. The last parameter callback will be added as an listener for the 'listening' event.

这是一个异步函数。当服务器已被绑定时会触发 'listening' 事件。最后一个参数 callback 会被用作 'listening' 事件的监听器。

server.listen(handle, [callback])#

  • handle Object
  • callback Function

  • handle处理器 Object

  • callback回调函数 Function

The handle object can be set to either a server or socket (anything with an underlying _handle member), or a {fd: <n>} object.

handle 对象可以被设置为一个服务器或套接字(任何具有底层 _handle 成员的对象),也可以是一个 {fd: <n>} 对象。

This will cause the server to accept connections on the specified handle, but it is presumed that the file descriptor or handle has already been bound to a port or domain socket.

这将使服务器用指定的句柄接受连接,但它假设文件描述符或者句柄已经被绑定在特定的端口或者域名套接字。

Listening on a file descriptor is not supported on Windows.

Windows 不支持监听一个文件描述符。

This function is asynchronous. When the server has been bound, 'listening' event will be emitted. the last parameter callback will be added as an listener for the 'listening' event.

这是一个异步函数。当服务器已被绑定时会触发 'listening' 事件。最后一个参数 callback 会被用作 'listening' 事件的监听器。

server.close([callback])#

Stops the server from accepting new connections and keeps existing connections. This function is asynchronous, the server is finally closed when all connections are ended and the server emits a 'close' event. Optionally, you can pass a callback to listen for the 'close' event.

停止服务器接受新连接,但保持已存在的连接。这是一个异步函数,服务器会在所有连接都结束后最终关闭,并触发 'close' 事件。您可以选择传入一个回调函数来监听 'close' 事件。

server.address()#

Returns the bound address, the address family name and port of the server as reported by the operating system. Useful to find which port was assigned when giving getting an OS-assigned address. Returns an object with three properties, e.g. { port: 12346, family: 'IPv4', address: '127.0.0.1' }

返回操作系统报告的服务器绑定地址、协议族名称和端口。在获取操作系统分配的地址时,可用于查找被分配的端口。返回一个包含三个属性的对象,例如 { port: 12346, family: 'IPv4', address: '127.0.0.1' }

Example:

示例:

// 获得随机端口
server.listen(function() {
  address = server.address();
  console.log("opened server on %j", address);
});

Don't call server.address() until the 'listening' event has been emitted.

'listening' 事件发生前请勿调用 server.address()

server.unref()#

Calling unref on a server will allow the program to exit if this is the only active server in the event system. If the server is already unrefd calling unref again will have no effect.

如果这是事件系统中唯一一个活动的服务器,调用 unref 将允许程序退出。如果服务器已被 unref,则再次调用 unref 并不会产生影响。

server.ref()#

Opposite of unref, calling ref on a previously unrefd server will not let the program exit if it's the only server left (the default behavior). If the server is refd calling ref again will have no effect.

unref 相反,如果这是仅剩的服务器,在一个之前被 unref 了的服务器上调用 ref 将不会让程序退出(缺省行为)。如果服务器已经被 ref,则再次调用 ref 并不会产生影响。

server.maxConnections#

Set this property to reject connections when the server's connection count gets high.

设置这个属性可以在服务器连接数变多时拒绝新连接。

It is not recommended to use this option once a socket has been sent to a child with child_process.fork().

一旦套接字已通过 child_process.fork() 发送给子进程,就不推荐再使用这个选项。

server.connections#

This function is deprecated; please use [server.getConnections()][] instead. The number of concurrent connections on the server.

这个函数已被废弃;请用 [server.getConnections()][] 代替。表示服务器当前的并发连接数。

This becomes null when sending a socket to a child with child_process.fork(). To poll forks and get current number of active connections use asynchronous server.getConnections instead.

当用 child_process.fork() 发送一个套接字给子进程时,它会变为 null。要轮询子进程并获取当前活动连接数,请改用异步的 server.getConnections。

net.Server is an EventEmitter with the following events:

net.Server 是一个包含下列事件的 EventEmitter :

server.getConnections(callback)#

Asynchronously get the number of concurrent connections on the server. Works when sockets were sent to forks.

异步获取服务器当前的并发连接数。在套接字被发送给子进程后也能正常工作。

Callback should take two arguments err and count.

回调函数应接受两个参数 err 和 count。

事件: 'listening'#

Emitted when the server has been bound after calling server.listen.

在服务器调用 server.listen绑定后触发。

事件: 'connection'#

  • Socket object The connection object

  • Socket object 连接对象

Emitted when a new connection is made. socket is an instance of net.Socket.

在一个新连接被创建时触发。 socket 是一个net.Socket的实例。

事件: 'close'#

Emitted when the server closes. Note that if connections exist, this event is not emitted until all connections are ended.

当服务器关闭时触发。注意:如果当前仍有活动连接,这个事件将等到所有连接都结束后才触发。

事件: 'error'#

  • Error Object

  • Error Object

Emitted when an error occurs. The 'close' event will be called directly following this event. See example in discussion of server.listen.

当一个错误发生时触发。'close' 事件会紧接着该事件被触发。示例请参见 server.listen 的相关讨论。

类: net.Socket#

This object is an abstraction of a TCP or UNIX socket. net.Socket instances implement a duplex Stream interface. They can be created by the user and used as a client (with connect()) or they can be created by Node and passed to the user through the 'connection' event of a server.

这个对象是 TCP 或 UNIX 套接字的抽象。net.Socket 实例实现了双工流(duplex Stream)接口。它们既可以由用户创建并用作客户端(配合 connect()),也可以由 Node 创建,并通过服务器的 'connection' 事件传递给用户。

new net.Socket([options])#

Construct a new socket object.

构造一个新的套接字对象。

options is an object with the following defaults:

options 是一个包含下列缺省值的对象:

{ fd: null
  type: null
  allowHalfOpen: false
}

fd allows you to specify the existing file descriptor of socket. type specified underlying protocol. It can be 'tcp4', 'tcp6', or 'unix'. About allowHalfOpen, refer to createServer() and 'end' event.

fd 允许您指定一个已存在的套接字文件描述符。type 指定底层协议,可以是 'tcp4'、'tcp6' 或 'unix'。关于 allowHalfOpen,参见 createServer() 和 'end' 事件。

socket.connect(port, [host], [connectListener])#

socket.connect(path, [connectListener])#


Opens the connection for a given socket. If port and host are given, then the socket will be opened as a TCP socket, if host is omitted, localhost will be assumed. If a path is given, the socket will be opened as a unix socket to that path.

打开给定套接字的连接。如果指定了 port 和 host,则套接字会以 TCP 套接字方式打开;如果省略 host,则默认为 localhost。如果指定了 path,套接字会以连接到该路径的 UNIX 套接字方式打开。

Normally this method is not needed, as net.createConnection opens the socket. Use this only if you are implementing a custom Socket.

一般情况下不需要使用这个方法,因为 net.createConnection 会打开套接字。只有在您实现自定义套接字时才需要用到它。

This function is asynchronous. When the 'connect' event is emitted the socket is established. If there is a problem connecting, the 'connect' event will not be emitted, the 'error' event will be emitted with the exception.

这是一个异步函数。当 'connect' 事件被触发时,套接字已建立连接。如果连接时出现问题,则 'connect' 事件不会被触发,而 'error' 事件会携带异常被触发。

The connectListener parameter will be added as an listener for the 'connect' event.

connectListener 参数会被添加为 'connect' 事件的监听器。

socket.bufferSize#

net.Socket has the property that socket.write() always works. This is to help users get up and running quickly. The computer cannot always keep up with the amount of data that is written to a socket - the network connection simply might be too slow. Node will internally queue up the data written to a socket and send it out over the wire when it is possible. (Internally it is polling on the socket's file descriptor for being writable).

net.Socket 有一个特性:socket.write() 总是可用。这是为了帮助用户快速上手。计算机并不总能跟上写入套接字的数据量,因为网络连接可能太慢。Node 会在内部将写入套接字的数据排入队列,并在可能时将其通过网络发送出去。(其内部实现是轮询套接字的文件描述符,等待其变为可写。)

The consequence of this internal buffering is that memory may grow. This property shows the number of characters currently buffered to be written. (Number of characters is approximately equal to the number of bytes to be written, but the buffer may contain strings, and the strings are lazily encoded, so the exact number of bytes is not known.)

内部缓冲的可能后果是内存使用会增加。这个属性表示了现在处于缓冲区等待被写入的字符数。(字符的数目约等于要被写入的字节数,但是缓冲区可能包含字符串,而字符串是惰性编码的,所以确切的字节数是未知的。)

Users who experience large or growing bufferSize should attempt to "throttle" the data flows in their program with pause() and resume().

遇到数值很大或者增长很快的bufferSize的时候,用户应该尝试用pause()resume()来控制数据流。

socket.setEncoding([encoding])#

Set the encoding for the socket as a Readable Stream. See stream.setEncoding() for more information.

设置套接字作为可读流(Readable Stream)时的编码。更多信息请查看 stream.setEncoding()。

socket.write(data, [encoding], [callback])#

Sends data on the socket. The second parameter specifies the encoding in the case of a string--it defaults to UTF8 encoding.

在套接字上发送数据。第二个参数指定了数据为字符串时的编码方式,默认为 UTF8 编码。

Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is again free.

如果所有数据都被成功刷新到内核缓冲区,则返回 true;如果所有或部分数据仍在用户内存中排队,则返回 false。当缓冲区再次空闲时会触发 'drain' 事件。

The optional callback parameter will be executed when the data is finally written out - this may not be immediately.

当数据最终被完整写入时,可选的callback参数会被执行 - 但不一定是马上执行。

socket.end([data], [encoding])#

Half-closes the socket. i.e., it sends a FIN packet. It is possible the server will still send some data.

半关闭套接字,即发送一个 FIN 包。服务器此后仍可能发送一些数据。

If data is specified, it is equivalent to calling socket.write(data, encoding) followed by socket.end().

如果传入了 data,则等同于先调用 socket.write(data, encoding) 再调用 socket.end()。

socket.destroy()#

Ensures that no more I/O activity happens on this socket. Only necessary in case of errors (parse error or so).

确保这个套接字上不再有 I/O 活动。仅在出现错误时才有必要调用(如解析错误等)。

socket.pause()#

Pauses the reading of data. That is, 'data' events will not be emitted. Useful to throttle back an upload.

暂停数据读取,即 'data' 事件将不会被触发。对于控制上传速度非常有用。

socket.resume()#

Resumes reading after a call to pause().

在调用 pause()后恢复读操作。

socket.setTimeout(timeout, [callback])#

Sets the socket to timeout after timeout milliseconds of inactivity on the socket. By default net.Socket do not have a timeout.

如果套接字超过timeout毫秒处于闲置状态,则将套接字设为超时。默认情况下net.Socket不存在超时。

When an idle timeout is triggered the socket will receive a 'timeout' event but the connection will not be severed. The user must manually end() or destroy() the socket.

当一个闲置超时被触发时,套接字会接收到一个'timeout'事件,但是连接将不会被断开。用户必须手动end()destroy()这个套接字。

If timeout is 0, then the existing idle timeout is disabled.

如果timeout为0,那么现有的闲置超时会被禁用。

The optional callback parameter will be added as a one time listener for the 'timeout' event.

可选的callback参数将会被添加成为'timeout'事件的一次性监听器。

socket.setNoDelay([noDelay])#

Disables the Nagle algorithm. By default TCP connections use the Nagle algorithm, they buffer data before sending it off. Setting true for noDelay will immediately fire off data each time socket.write() is called. noDelay defaults to true.

禁用纳格(Nagle)算法。默认情况下TCP连接使用纳格算法,这些连接在发送数据之前对数据进行缓冲处理。 将noDelay设成true会在每次socket.write()被调用时立刻发送数据。noDelay默认为true

socket.setKeepAlive([enable], [initialDelay])#

Enable/disable keep-alive functionality, and optionally set the initial delay before the first keepalive probe is sent on an idle socket. enable defaults to false.

禁用/启用长连接功能,并在第一个在闲置套接字上的长连接probe被发送之前,可选地设定初始延时。enable默认为false

Set initialDelay (in milliseconds) to set the delay between the last data packet received and the first keepalive probe. Setting 0 for initialDelay will leave the value unchanged from the default (or previous) setting. Defaults to 0.

设定initialDelay (毫秒),来设定在收到的最后一个数据包和第一个长连接probe之间的延时。将initialDelay设成0会让值保持不变(默认值或之前所设的值)。默认为0

socket.address()#

Returns the bound address, the address family name and port of the socket as reported by the operating system. Returns an object with three properties, e.g. { port: 12346, family: 'IPv4', address: '127.0.0.1' }

返回操作系统报告的套接字绑定地址、协议族名称和端口。返回一个包含三个属性的对象,例如 { port: 12346, family: 'IPv4', address: '127.0.0.1' }

socket.unref()#

Calling unref on a socket will allow the program to exit if this is the only active socket in the event system. If the socket is already unrefd calling unref again will have no effect.

如果这是事件系统中唯一一个活动的套接字,调用 unref 将允许程序退出。如果套接字已被 unref,则再次调用 unref 并不会产生影响。

socket.ref()#

Opposite of unref, calling ref on a previously unrefd socket will not let the program exit if it's the only socket left (the default behavior). If the socket is refd calling ref again will have no effect.

unref 相反,如果这是仅剩的套接字,在一个之前被 unref 了的套接字上调用 ref不会让程序退出(缺省行为)。如果一个套接字已经被 ref,则再次调用 ref 并不会产生影响。

socket.remoteAddress#

The string representation of the remote IP address. For example, '74.125.127.100' or '2001:4860:a005::68'.

远程IP地址的字符串表示。例如,'74.125.127.100''2001:4860:a005::68'

socket.remotePort#

The numeric representation of the remote port. For example, 80 or 21.

远程端口的数值表示。例如,8021

socket.localAddress#

The string representation of the local IP address the remote client is connecting on. For example, if you are listening on '0.0.0.0' and the client connects on '192.168.1.1', the value would be '192.168.1.1'.

远程客户端正在连接的本地IP地址的字符串表示。例如,如果你在监听'0.0.0.0'而客户端连接在'192.168.1.1',这个值就会是 '192.168.1.1'

socket.localPort#

The numeric representation of the local port. For example, 80 or 21.

本地端口的数值表示。比如8021

socket.bytesRead#

The amount of received bytes.

所接收的字节数。

socket.bytesWritten#

The amount of bytes sent.

所发送的字节数。

net.Socket instances are EventEmitter with the following events:

net.Socket实例是带有以下事件的EventEmitter对象:

事件: 'lookup'#

Emitted after resolving the hostname but before connecting. Not applicable to UNIX sockets.

这个事件在解析主机名之后,连接主机之前被分发。对UNIX套接字不适用。

  • err {Error | Null} The error object. See [dns.lookup()][].
  • address {String} The IP address.
  • family {String | Null} The address type. See [dns.lookup()][].

  • err {Error | Null} 错误对象。见[dns.lookup()][]。

  • address {String} IP地址。
  • family {String | Null} 地址类型。见[dns.lookup()][]。

事件: 'connect'#

Emitted when a socket connection is successfully established. See connect().

该事件在一个套接字连接成功建立后被分发。见connect()

事件: 'data'#

  • Buffer object

  • Buffer object

Emitted when data is received. The argument data will be a Buffer or String. Encoding of data is set by socket.setEncoding(). (See the Readable Stream section for more information.)

当收到数据时被分发。data参数会是一个BufferString对象。数据的编码方式由socket.setEncoding()设定。 (详见 [可读流][] 章节)

Note that the data will be lost if there is no listener when a Socket emits a 'data' event.

请注意,如果一个 Socket 对象触发 'data' 事件时没有任何监听器存在,则数据会丢失。

事件: 'end'#

Emitted when the other end of the socket sends a FIN packet.

当套接字的另一端发送FIN包时,该事件被分发。

By default (allowHalfOpen == false) the socket will destroy its file descriptor once it has written out its pending write queue. However, by setting allowHalfOpen == true the socket will not automatically end() its side allowing the user to write arbitrary amounts of data, with the caveat that the user is required to end() their side now.

默认情况下 (allowHalfOpen == false),当套接字完成待写入队列中的任务时,它会destroy文件描述符。然而,如果把allowHalfOpen设成true,那么套接字将不会从它这边自动调用end(),使得用户可以随意写入数据,但同时使得用户自己需要调用end()

事件: 'timeout'#

Emitted if the socket times out from inactivity. This is only to notify that the socket has been idle. The user must manually close the connection.

当套接字因为非活动状态而超时时该事件被分发。这只是用来表明套接字处于空闲状态。用户必须手动关闭这个连接。

See also: socket.setTimeout()

参阅:socket.setTimeout()

事件: 'drain'#

Emitted when the write buffer becomes empty. Can be used to throttle uploads.

当写入缓冲被清空时产生。可被用于控制上传流量。

See also: the return values of socket.write()

参阅:socket.write() 的返回值

事件: 'error'#

  • Error object

  • Error object

Emitted when an error occurs. The 'close' event will be called directly following this event.

当一个错误发生时产生。'close' 事件会紧接着该事件被触发。

事件: 'close'#

  • had_error Boolean true if the socket had a transmission error

  • had_error Boolean 如果套接字发生了传输错误则此字段为true

Emitted once the socket is fully closed. The argument had_error is a boolean which says if the socket was closed due to a transmission error.

当套接字完全关闭时该事件被分发。参数had_error是一个布尔值,表示了套接字是否因为一个传输错误而被关闭。

net.isIP(input)#

Tests if input is an IP address. Returns 0 for invalid strings, returns 4 for IP version 4 addresses, and returns 6 for IP version 6 addresses.

测试 input 是否 IP 地址。无效字符串返回 0;IP 版本 4 地址返回 4;IP 版本 6 地址返回 6。

net.isIPv4(input)#

Returns true if input is a version 4 IP address, otherwise returns false.

如果 input 为版本 4 地址则返回 true,否则返回 false。

net.isIPv6(input)#

Returns true if input is a version 6 IP address, otherwise returns false.

如果 input 为版本 6 地址则返回 true,否则返回 false。

UDP / 数据报套接字#

稳定度: 3 - 稳定

Datagram sockets are available through require('dgram').

数据报套接字通过 require('dgram') 提供。

Important note: the behavior of dgram.Socket#bind() has changed in v0.10 and is always asynchronous now. If you have code that looks like this:

重要提醒:dgram.Socket#bind() 的行为在 v0.10 中已改变,并且现在它总是异步的。如果您的代码看起来像这样:

var s = dgram.createSocket('udp4');
s.bind(1234);
s.addMembership('224.0.0.114');

You have to change it to this:

您需要将它改成这样:

var s = dgram.createSocket('udp4');
s.bind(1234, function() {
  s.addMembership('224.0.0.114');
});

dgram.createSocket(type, [callback])#

  • type String. Either 'udp4' or 'udp6'
  • callback Function. Attached as a listener to message events. Optional
  • Returns: Socket object

  • type String 可以是 'udp4' 或 'udp6'

  • callback Function 可选,会被作为 message 事件的监听器。
  • 返回:Socket 对象

Creates a datagram Socket of the specified types. Valid types are udp4 and udp6.

创建一个指定类型的数据报 Socket。有效类型包括 udp4udp6

Takes an optional callback which is added as a listener for message events.

接受一个可选的回调,会被添加为 message 事件的监听器。

Call socket.bind if you want to receive datagrams. socket.bind() will bind to the "all interfaces" address on a random port (it does the right thing for both udp4 and udp6 sockets). You can then retrieve the address and port with socket.address().address and socket.address().port.

如果您想接收数据报则可调用 socket.bindsocket.bind() 会绑定到“所有网络接口”地址的一个随机端口(udp4udp6 皆是如此)。然后您可以通过 socket.address().addresssocket.address().port 来取得地址和端口。

类: dgram.Socket#

The dgram Socket class encapsulates the datagram functionality. It should be created via dgram.createSocket(type, [callback]).

dgram Socket 类封装了数据报功能,可以通过 dgram.createSocket(type, [callback]) 创建。

事件: 'message'#

  • msg Buffer object. The message
  • rinfo Object. Remote address information

  • msg Buffer 对象,消息

  • rinfo Object,远程地址信息

Emitted when a new datagram is available on a socket. msg is a Buffer and rinfo is an object with the sender's address information:

当套接字中有新的数据报时发生。msg 是一个 Bufferrinfo 是一个包含了发送者地址信息的对象:

socket.on('message', function(msg, rinfo) {
  console.log('收到 %d 字节,来自 %s:%d\n',
              msg.length, rinfo.address, rinfo.port);
});

事件: 'listening'#

Emitted when a socket starts listening for datagrams. This happens as soon as UDP sockets are created.

当一个套接字开始监听数据报时产生。它会在 UDP 套接字被创建时发生。

事件: 'close'#

Emitted when a socket is closed with close(). No new message events will be emitted on this socket.

当一个套接字被 close() 关闭时产生。之后这个套接字上不会再有 message 事件发生。

事件: 'error'#

  • exception Error object

  • exception Error 对象

Emitted when an error occurs.

当发生错误时产生。

socket.send(buf, offset, length, port, address, [callback])#

  • buf Buffer object. Message to be sent
  • offset Integer. Offset in the buffer where the message starts.
  • length Integer. Number of bytes in the message.
  • port Integer. destination port
  • address String. destination IP
  • callback Function. Callback when message is done being delivered. Optional.

  • buf Buffer 对象,要发送的消息

  • offset Integer,Buffer 中消息起始偏移值。
  • length Integer,消息的字节数。
  • port Integer,目标端口
  • address String,目标 IP
  • callback Function,可选,当消息被投递后的回调。

For UDP sockets, the destination port and IP address must be specified. A string may be supplied for the address parameter, and it will be resolved with DNS. An optional callback may be specified to detect any DNS errors and when buf may be re-used. Note that DNS lookups will delay the time that a send takes place, at least until the next tick. The only way to know for sure that a send has taken place is to use the callback.

对于 UDP 套接字,必须指定目标端口和 IP 地址。address 参数可以是一个字符串,它会被 DNS 解析。可选地可以指定一个回调以用于发现任何 DNS 错误或当 buf 可被重用。请注意 DNS 查询会将发送的时间推迟到至少下一个事件循环。确认发送完毕的唯一已知方法是使用回调。

If the socket has not been previously bound with a call to bind, it's assigned a random port number and bound to the "all interfaces" address (0.0.0.0 for udp4 sockets, ::0 for udp6 sockets).

如果套接字之前并未被调用 bind 绑定,则它会被分配一个随机端口并绑定到“所有网络接口”地址(udp4 套接字是 0.0.0.0;udp6 套接字是 ::0)。

Example of sending a UDP packet to a random port on localhost;

localhost 随机端口发送 UDP 报文的例子:

var dgram = require('dgram');
var message = new Buffer("Some bytes");
var client = dgram.createSocket("udp4");
client.send(message, 0, message.length, 41234, "localhost", function(err) {
  client.close();
});

A Note about UDP datagram size

关于 UDP 数据报大小的注意事项

The maximum size of an IPv4/v6 datagram depends on the MTU (Maximum Transmission Unit) and on the Payload Length field size.

一个 IPv4/v6 数据报的最大大小取决于 MTU(最大传输单元)和 Payload Length 字段的大小。

  • The Payload Length field is 16 bits wide, which means that a normal payload cannot be larger than 64K octets including internet header and data (65,507 bytes = 65,535 − 8 bytes UDP header − 20 bytes IP header); this is generally true for loopback interfaces, but such long datagrams are impractical for most hosts and networks.

  • Payload Length 字段宽 16 位,意味着包括网络头和数据在内,正常的负载不能大于 64K 字节(65,507 字节 = 65,535 − 8 字节 UDP 头 − 20 字节 IP 头);这对环回接口通常成立,但如此大的数据报对大多数主机和网络来说并不实际。

  • The MTU is the largest size a given link layer technology can support for datagrams. For any link, IPv4 mandates a minimum MTU of 68 octets, while the recommended MTU for IPv4 is 576 (typically recommended as the MTU for dial-up type applications), whether they arrive whole or in fragments.

  • MTU 是给定的数据链路层技术能为数据报提供支持的最大大小。对于任何链路,IPv4 规定最小 MTU 为 68 字节,而 IPv4 推荐的 MTU 为 576(通常作为拨号类应用的推荐 MTU),无论数据报是完整到达还是分片到达。

    For IPv6, the minimum MTU is 1280 octets, however, the mandatory minimum fragment reassembly buffer size is 1500 octets. The value of 68 octets is very small, since most current link layer technologies have a minimum MTU of 1500 (like Ethernet).

    对于 IPv6,最小 MTU 为 1280 字节,但强制要求的最小分片重组缓冲区大小为 1500 字节。68 字节是非常小的值,因为目前大多数数据链路层技术(比如以太网)的最小 MTU 都是 1500。

Note that it's impossible to know in advance the MTU of each link through which a packet might travel, and that generally sending a datagram greater than the (receiver) MTU won't work (the packet gets silently dropped, without informing the source that the data did not reach its intended recipient).

请注意,我们无法提前得知报文可能经过的每一条链路的 MTU,因此发送大于(接收方)MTU 的数据报通常不会成功(报文会被悄悄丢弃,而不会通知来源方数据未到达预期的接收者)。

socket.bind(port, [address], [callback])#

  • port Integer
  • address String, Optional
  • callback Function with no parameters, Optional. Callback when binding is done.

  • port Integer

  • address String,可选
  • callback 没有参数的 Function,可选,当绑定完成时被调用。

For UDP sockets, listen for datagrams on a named port and optional address. If address is not specified, the OS will try to listen on all addresses. After binding is done, a "listening" event is emitted and the callback(if specified) is called. Specifying both a "listening" event listener and callback is not harmful but not very useful.

对于 UDP 套接字,在一个具名端口 port 和可选的地址 address 上监听数据报。如果 address 未指定,则操作系统会尝试监听所有地址。当绑定完成后,一个 "listening" 事件会发生,并且回调 callback(如果指定)会被调用。同时指定 "listening" 事件监听器和 callback 并不会产生副作用,但也没什么用。

A bound datagram socket keeps the node process running to receive datagrams.

一个绑定了的数据报套接字会保持 node 进程运行来接收数据报。

If binding fails, an "error" event is generated. In rare case (e.g. binding a closed socket), an Error may be thrown by this method.

如果绑定失败,则一个 "error" 事件会被产生。在极少情况下(比如绑定一个已关闭的套接字),该方法会抛出一个 Error

Example of a UDP server listening on port 41234:

一个监听端口 41234 的 UDP 服务器的例子:

var dgram = require("dgram");
var server = dgram.createSocket("udp4");

server.on("message", function (msg, rinfo) {
  console.log("服务器收到:" + msg + " 来自 " +
    rinfo.address + ":" + rinfo.port);
});

server.on("listening", function () {
  var address = server.address();
  console.log("服务器正在监听 " + address.address + ":" + address.port);
});

server.bind(41234);
// 服务器正在监听 0.0.0.0:41234

socket.close()#

Close the underlying socket and stop listening for data on it.

关闭底层套接字并停止监听数据。

socket.address()#

Returns an object containing the address information for a socket. For UDP sockets, this object will contain address , family and port.

返回一个包含了套接字地址信息的对象。对于 UDP 套接字,该对象会包含地址 address、地址族 family 和端口号 port

socket.setBroadcast(flag)#

  • flag Boolean

  • flag Boolean

Sets or clears the SO_BROADCAST socket option. When this option is set, UDP packets may be sent to a local interface's broadcast address.

设置或清除 SO_BROADCAST 套接字选项。当该选项被设置,则 UDP 报文可能被发送到一个本地接口的广播地址。

socket.setTTL(ttl)#

  • ttl Integer

  • ttl Integer

Sets the IP_TTL socket option. TTL stands for "Time to Live," but in this context it specifies the number of IP hops that a packet is allowed to go through. Each router or gateway that forwards a packet decrements the TTL. If the TTL is decremented to 0 by a router, it will not be forwarded. Changing TTL values is typically done for network probes or when multicasting.

设置 IP_TTL 套接字选项。TTL 表示“Time to Live”(生存时间),但在此上下文中它指的是报文允许通过的 IP 跃点数。各个转发报文的路由器或网关都会递减 TTL。如果 TTL 被一个路由器递减到 0,则它将不会被转发。改变 TTL 值通常被用于网络探测器或多播。

The argument to setTTL() is a number of hops between 1 and 255. The default on most systems is 64.

setTTL() 的参数为介于 1 至 255 的跃点数。在大多数系统上缺省值为 64。

socket.setMulticastTTL(ttl)#

  • ttl Integer

  • ttl Integer

Sets the IP_MULTICAST_TTL socket option. TTL stands for "Time to Live," but in this context it specifies the number of IP hops that a packet is allowed to go through, specifically for multicast traffic. Each router or gateway that forwards a packet decrements the TTL. If the TTL is decremented to 0 by a router, it will not be forwarded.

设置 IP_MULTICAST_TTL 套接字选项。TTL 表示“Time to Live”(生存时间),但在此上下文中它指的是报文允许通过的 IP 跃点数,特别是组播流量。各个转发报文的路由器或网关都会递减 TTL。如果 TTL 被一个路由器递减到 0,则它将不会被转发。

The argument to setMulticastTTL() is a number of hops between 0 and 255. The default on most systems is 1.

setMulticastTTL() 的参数为介于 0 至 255 的跃点数。在大多数系统上缺省值为 1。

socket.setMulticastLoopback(flag)#

  • flag Boolean

  • flag Boolean

Sets or clears the IP_MULTICAST_LOOP socket option. When this option is set, multicast packets will also be received on the local interface.

设置或清除 IP_MULTICAST_LOOP 套接字选项。当该选项被设置时,组播报文也会被本地接口收到。

socket.addMembership(multicastAddress, [multicastInterface])#

  • multicastAddress String
  • multicastInterface String, Optional

  • multicastAddress String

  • multicastInterface String,可选

Tells the kernel to join a multicast group with IP_ADD_MEMBERSHIP socket option.

IP_ADD_MEMBERSHIP 套接字选项告诉内核加入一个组播分组。

If multicastInterface is not specified, the OS will try to add membership to all valid interfaces.

如果未指定 multicastInterface,则操作系统会尝试向所有有效接口添加成员关系。

socket.dropMembership(multicastAddress, [multicastInterface])#

  • multicastAddress String
  • multicastInterface String, Optional

  • multicastAddress String

  • multicastInterface String,可选

Opposite of addMembership - tells the kernel to leave a multicast group with IP_DROP_MEMBERSHIP socket option. This is automatically called by the kernel when the socket is closed or process terminates, so most apps will never need to call this.

addMembership 相反,以 IP_DROP_MEMBERSHIP 套接字选项告诉内核退出一个组播分组。当套接字被关闭或进程结束时内核会自动调用,因此大多数应用都没必要调用它。

If multicastInterface is not specified, the OS will try to drop membership to all valid interfaces.

如果未指定 multicastInterface,则操作系统会尝试从所有有效接口移除成员关系。

socket.unref()#

Calling unref on a socket will allow the program to exit if this is the only active socket in the event system. If the socket is already unrefd calling unref again will have no effect.

如果这是事件系统中唯一一个活动的套接字,调用 unref 将允许程序退出。如果套接字已被 unref,则再次调用 unref 并不会产生影响。

socket.ref()#

Opposite of unref, calling ref on a previously unrefd socket will not let the program exit if it's the only socket left (the default behavior). If the socket is refd calling ref again will have no effect.

unref 相反,如果这是仅剩的套接字,在一个之前被 unref 了的套接字上调用 ref不会让程序退出(缺省行为)。如果一个套接字已经被 ref,则再次调用 ref 并不会产生影响。

DNS#

稳定度: 3 - 稳定

Use require('dns') to access this module. All methods in the dns module use C-Ares except for dns.lookup which uses getaddrinfo(3) in a thread pool. C-Ares is much faster than getaddrinfo but the system resolver is more constant with how other programs operate. When a user does net.connect(80, 'google.com') or http.get({ host: 'google.com' }) the dns.lookup method is used. Users who need to do a large number of lookups quickly should use the methods that go through C-Ares.

使用 require('dns') 引入此模块。dns 模块中的所有方法都使用 C-Ares,除了 dns.lookup 使用线程池中的 getaddrinfo(3)。C-Ares 比 getaddrinfo 快得多,但系统解析器的行为与其它程序更为一致。当用户调用 net.connect(80, 'google.com') 或 http.get({ host: 'google.com' }) 时,使用的是 dns.lookup 方法。需要快速进行大量查询的用户应当使用经由 C-Ares 的方法。

Here is an example which resolves 'www.google.com' then reverse resolves the IP addresses which are returned.

下面是一个解析 'www.google.com' 并反向解析所返回 IP 地址的例子。

var dns = require('dns');

dns.resolve4('www.google.com', function (err, addresses) {
  if (err) throw err;

  console.log('地址: ' + JSON.stringify(addresses));

  addresses.forEach(function (a) {
    dns.reverse(a, function (err, domains) {
      if (err) throw err;

      console.log('反向解析 ' + a + ': ' + JSON.stringify(domains));
    });
  });
});

dns.lookup(domain, [family], callback)#

Resolves a domain (e.g. 'google.com') into the first found A (IPv4) or AAAA (IPv6) record. The family can be the integer 4 or 6. Defaults to null that indicates both Ip v4 and v6 address family.

将一个域名(比如 'google.com')解析为第一个找到的 A 记录(IPv4)或 AAAA 记录(IPv6)。地址族 family 可以是数字 46,缺省为 null 表示同时允许 IPv4 和 IPv6 地址族。

The callback has arguments (err, address, family). The address argument is a string representation of a IP v4 or v6 address. The family argument is either the integer 4 or 6 and denotes the family of address (not necessarily the value initially passed to lookup).

回调的参数为 (err, address, family)。address 参数为一个代表 IPv4 或 IPv6 地址的字符串。family 参数为数字 4 或 6,代表 address 的地址族(不一定是最初传入 lookup 的值)。

On error, err is an Error object, where err.code is the error code. Keep in mind that err.code will be set to 'ENOENT' not only when the domain does not exist but also when the lookup fails in other ways such as no available file descriptors.

当错误发生时,err 为一个 Error 对象,其中 err.code 为错误代码。请记住,err.code 被设定为 'ENOENT' 不仅发生在域名不存在时,也可能是查询以其它方式失败,比如没有可用的文件描述符。

dns.resolve(domain, [rrtype], callback)#

Resolves a domain (e.g. 'google.com') into an array of the record types specified by rrtype. Valid rrtypes are 'A' (IPV4 addresses, default), 'AAAA' (IPV6 addresses), 'MX' (mail exchange records), 'TXT' (text records), 'SRV' (SRV records), 'PTR' (used for reverse IP lookups), 'NS' (name server records) and 'CNAME' (canonical name records).

将一个域名(比如 'google.com')解析为一个 rrtype 指定记录类型的数组。有效 rrtypes 取值有 'A'(IPv4 地址,缺省)、'AAAA'(IPv6 地址)、'MX'(邮件交换记录)、'TXT'(文本记录)、'SRV'(SRV 记录)、'PTR'(用于 IP 反向查找)、'NS'(域名服务器记录)和 'CNAME'(别名记录)。

The callback has arguments (err, addresses). The type of each item in addresses is determined by the record type, and described in the documentation for the corresponding lookup methods below.

回调参数为 (err, addresses)。其中 addresses 中每一项的类型取决于记录类型,详见下文对应的查找方法。

On error, err is an Error object, where err.code is one of the error codes listed below.

当出错时,err 参数为一个 Error 对象,其中 err.code 为下文所列出的错误代码之一。

dns.resolve4(domain, callback)#

The same as dns.resolve(), but only for IPv4 queries (A records). addresses is an array of IPv4 addresses (e.g. ['74.125.79.104', '74.125.79.105', '74.125.79.106']).

dns.resolve() 一样,但只用于查询 IPv4(A 记录)。addresses 是一个 IPv4 地址的数组(比如 ['74.125.79.104', '74.125.79.105', '74.125.79.106'])。

dns.resolve6(domain, callback)#

The same as dns.resolve4() except for IPv6 queries (an AAAA query).

类似于 dns.resolve4(),但用于 IPv6(AAAA)查询。

dns.resolveMx(domain, callback)#

The same as dns.resolve(), but only for mail exchange queries (MX records).

类似于 dns.resolve(),但用于邮件交换查询(MX 记录)。

addresses is an array of MX records, each with a priority and an exchange attribute (e.g. [{'priority': 10, 'exchange': 'mx.example.com'},...]).

addresses 为一个 MX 记录的数组,每一项包含优先级和交换属性(比如 [{'priority': 10, 'exchange': 'mx.example.com'},...])。

dns.resolveTxt(domain, callback)#

The same as dns.resolve(), but only for text queries (TXT records). addresses is an array of the text records available for domain (e.g., ['v=spf1 ip4:0.0.0.0 ~all']).

dns.resolve() 相似,但用于文本查询(TXT 记录)。addressesdomain 可用文本记录的数组(比如 ['v=spf1 ip4:0.0.0.0 ~all'])。

dns.resolveSrv(domain, callback)#

The same as dns.resolve(), but only for service records (SRV records). addresses is an array of the SRV records available for domain. Properties of SRV records are priority, weight, port, and name (e.g., [{'priority': 10, 'weight': 5, 'port': 21223, 'name': 'service.example.com'}, ...]).

查询 SRV 记录,与 dns.resolve() 相似。addresses 是域名 domain 可用的 SRV 记录数组,每一条记录都包含优先级(priority)、权重(weight)、端口号(port)和服务名称(name)等属性(比如:[{'priority': 10, 'weight': 5, 'port': 21223, 'name': 'service.example.com'}, ...])。

dns.resolveNs(domain, callback)#

The same as dns.resolve(), but only for name server records (NS records). addresses is an array of the name server records available for domain (e.g., ['ns1.example.com', 'ns2.example.com']).

查询 NS 记录,与 dns.resolve() 相似。addresses 是域名 domain 可用的 NS 记录数组(比如:['ns1.example.com', 'ns2.example.com'])。

dns.resolveCname(domain, callback)#

The same as dns.resolve(), but only for canonical name records (CNAME records). addresses is an array of the canonical name records available for domain (e.g., ['bar.example.com']).

查询 CNAME 记录,与 dns.resolve() 相似。addresses 是域名 domain 可用的 CNAME 记录数组(比如:['bar.example.com'])。

dns.reverse(ip, callback)#

Reverse resolves an ip address to an array of domain names.

反向解析 IP 地址,返回指向该 IP 地址的域名数组。

The callback has arguments (err, domains).

回调函数接收两个参数:(err, domains)。

On error, err is an Error object, where err.code is one of the error codes listed below.

当出错时,err 参数为一个 Error 对象,其中 err.code 为下文所列出的错误代码之一。

dns.getServers()#

Returns an array of IP addresses as strings that are currently being used for resolution

以字符串数组的形式返回当前用于解析的 IP 地址。

dns.setServers(servers)#

Given an array of IP addresses as strings, set them as the servers to use for resolving

指定一个 IP 地址字符串数组,将它们作为解析所用的服务器。

If you specify a port with the address it will be stripped, as the underlying library doesn't support that.

如果您在地址中指定了端口,则端口会被忽略,因为底层库并不支持。

This will throw if you pass invalid input.

如果您传入无效参数,则会抛出异常。

错误代码#

Each DNS query can return one of the following error codes:

每个 DNS 查询都可能返回下列错误代码之一:

  • dns.NODATA: DNS server returned answer with no data.
  • dns.FORMERR: DNS server claims query was misformatted.
  • dns.SERVFAIL: DNS server returned general failure.
  • dns.NOTFOUND: Domain name not found.
  • dns.NOTIMP: DNS server does not implement requested operation.
  • dns.REFUSED: DNS server refused query.
  • dns.BADQUERY: Misformatted DNS query.
  • dns.BADNAME: Misformatted domain name.
  • dns.BADFAMILY: Unsupported address family.
  • dns.BADRESP: Misformatted DNS reply.
  • dns.CONNREFUSED: Could not contact DNS servers.
  • dns.TIMEOUT: Timeout while contacting DNS servers.
  • dns.EOF: End of file.
  • dns.FILE: Error reading file.
  • dns.NOMEM: Out of memory.
  • dns.DESTRUCTION: Channel is being destroyed.
  • dns.BADSTR: Misformatted string.
  • dns.BADFLAGS: Illegal flags specified.
  • dns.NONAME: Given hostname is not numeric.
  • dns.BADHINTS: Illegal hints flags specified.
  • dns.NOTINITIALIZED: c-ares library initialization not yet performed.
  • dns.LOADIPHLPAPI: Error loading iphlpapi.dll.
  • dns.ADDRGETNETWORKPARAMS: Could not find GetNetworkParams function.
  • dns.CANCELLED: DNS query cancelled.

  • dns.NODATA: DNS 服务器返回无数据应答。

  • dns.FORMERR: DNS 声称查询格式错误。
  • dns.SERVFAIL: DNS 服务器返回一般失败。
  • dns.NOTFOUND: 域名未找到。
  • dns.NOTIMP: DNS 服务器未实现所请求操作。
  • dns.REFUSED: DNS 服务器拒绝查询。
  • dns.BADQUERY: DNS 查询格式错误。
  • dns.BADNAME: 域名格式错误。
  • dns.BADFAMILY: 不支持的地址类型。
  • dns.BADRESP: DNS 答复格式错误。
  • dns.CONNREFUSED: 无法联系 DNS 服务器。
  • dns.TIMEOUT: 联系 DNS 服务器超时。
  • dns.EOF: 文件末端。
  • dns.FILE: 读取文件错误。
  • dns.NOMEM: 超出内存。
  • dns.DESTRUCTION: 通道正在被销毁。
  • dns.BADSTR: 字符串格式错误。
  • dns.BADFLAGS: 指定了非法标记。
  • dns.NONAME: 所给主机名非数字。
  • dns.BADHINTS: 指定了非法提示标记。
  • dns.NOTINITIALIZED: c-ares 库初始化尚未进行。
  • dns.LOADIPHLPAPI: 加载 iphlpapi.dll 出错。
  • dns.ADDRGETNETWORKPARAMS: 无法找到 GetNetworkParams 函数。
  • dns.CANCELLED: DNS 查询取消。

HTTP#

稳定度: 3 - 稳定

To use the HTTP server and client one must require('http').

要使用 HTTP 服务器和客户端功能,需通过 require('http') 引入此模块。

The HTTP interfaces in Node are designed to support many features of the protocol which have been traditionally difficult to use. In particular, large, possibly chunk-encoded, messages. The interface is careful to never buffer entire requests or responses--the user is able to stream data.

Node 中的 HTTP 接口被设计为支持该协议中许多传统上用起来很困难的特性,特别是大块的、可能是分块编码的消息。这些接口很谨慎地从不缓存整个请求(request)或响应(response),用户可以对它们使用数据流。

HTTP message headers are represented by an object like this:

HTTP 的消息头(Headers)通过如下对象来表示:

{ 'content-length': '123',
  'content-type': 'text/plain',
  'connection': 'keep-alive',
  'host': 'mysite.com',
  'accept': '*/*' }

Keys are lowercased. Values are not modified.

其中键为小写,值不会被修改。

In order to support the full spectrum of possible HTTP applications, Node's HTTP API is very low-level. It deals with stream handling and message parsing only. It parses a message into headers and body but it does not parse the actual headers or the body.

为了尽可能全面地支持 HTTP 应用,Node 的 HTTP API 非常接近底层。它只处理流操作和消息解析。它将消息解析为报文头和报文体,但不会解析实际报文头或报文体的内容。

Defined headers that allow multiple values are concatenated with a , character, except for the set-cookie and cookie headers which are represented as an array of values. Headers such as content-length which can only have a single value are parsed accordingly, and only a single value is represented on the parsed object.

已定义的允许多值的消息头会以 , 字符连接,但 set-cookie 和 cookie 头例外,它们以值的数组表示。content-length 这类只能有单一值的消息头会被相应地解析,解析后的对象上也只会有单一值。

The raw headers as they were received are retained in the rawHeaders property, which is an array of [key, value, key2, value2, ...]. For example, the previous message header object might have a rawHeaders list like the following:

接收到的原始头信息以数组形式 [key, value, key2, value2, ...] 保存在 rawHeaders 属性中. 例如, 前面提到的消息对象会有 rawHeaders 列表如下:

[ 'ConTent-Length', '123456',
  'content-LENGTH', '123',
  'content-type', 'text/plain',
  'CONNECTION', 'keep-alive',
  'Host', 'mysite.com',
  'accepT', '*/*' ]

http.STATUS_CODES#

  • Object

  • Object

A collection of all the standard HTTP response status codes, and the short description of each. For example, http.STATUS_CODES[404] === 'Not Found'.

所有标准 HTTP 响应状态码的集合,以及每个状态码的简短描述。例如:http.STATUS_CODES[404] === 'Not Found'。

http.createServer([requestListener])#

Returns a new web server object.

返回一个新的 web 服务器对象。

The requestListener is a function which is automatically added to the 'request' event.

参数 requestListener 是一个函数,它将会自动加入到 'request' 事件的监听队列.

http.createClient([port], [host])#

This function is deprecated; please use http.request() instead. Constructs a new HTTP client. port and host refer to the server to be connected to.

该函数已弃用,请用http.request()代替. 创建一个新的HTTP客户端. porthost 表示所连接的服务器.

Class: http.Server#

This is an EventEmitter with the following events:

这是一个包含下列事件的EventEmitter:

Event: 'request'#

function (request, response) { }

function (request, response) { }

Emitted each time there is a request. Note that there may be multiple requests per connection (in the case of keep-alive connections). request is an instance of http.IncomingMessage and response is an instance of http.ServerResponse

每次收到一个请求时触发。注意每个连接可能有多个请求(在 keep-alive 连接的情况下)。request 是 http.IncomingMessage 的一个实例,response 是 http.ServerResponse 的一个实例。

事件: 'connection'#

function (socket) { }

function (socket) { }

When a new TCP stream is established. socket is an object of type net.Socket. Usually users will not want to access this event. In particular, the socket will not emit readable events because of how the protocol parser attaches to the socket. The socket can also be accessed at request.connection.

新的 TCP 流建立时触发。socket 是一个 net.Socket 对象。通常用户无需处理该事件。特别注意,由于协议解析器绑定套接字的方式,套接字不会触发 readable 事件。还可以通过 request.connection 访问 socket。

事件: 'close'#

function () { }

function () { }

Emitted when the server closes.

当此服务器关闭时触发。

Event: 'checkContinue'#

function (request, response) { }

function (request, response) { }

Emitted each time a request with an http Expect: 100-continue is received. If this event isn't listened for, the server will automatically respond with a 100 Continue as appropriate.

每当收到Expect: 100-continue的http请求时触发。 如果未监听该事件,服务器会酌情自动发送100 Continue响应。

Handling this event involves calling response.writeContinue if the client should continue to send the request body, or generating an appropriate HTTP response (e.g., 400 Bad Request) if the client should not continue to send the request body.

处理该事件时,如果客户端可以继续发送请求主体则调用response.writeContinue, 如果不能则生成合适的HTTP响应(例如,400 请求无效)。

Note that when this event is emitted and handled, the request event will not be emitted.

需要注意,当这个事件被触发并处理后,request 事件将不会再被触发。

事件: 'connect'#

function (request, socket, head) { }

function (request, socket, head) { }

Emitted each time a client requests a http CONNECT method. If this event isn't listened for, then clients requesting a CONNECT method will have their connections closed.

每当客户端发起 CONNECT 请求时触发。如果未监听该事件,发起 CONNECT 请求的客户端的连接会被关闭。

  • request is the arguments for the http request, as it is in the request event.
  • socket is the network socket between the server and client.
  • head is an instance of Buffer, the first packet of the tunneling stream, this may be empty.

  • request 是该HTTP请求的参数,与request事件中的相同。

  • socket 是服务端与客户端之间的网络套接字。
  • head 是一个Buffer实例,隧道流的第一个包,该参数可能为空。

After this event is emitted, the request's socket will not have a data event listener, meaning you will need to bind to it in order to handle data sent to the server on that socket.

该事件触发后,请求的套接字上不会有 data 事件监听器,也就是说您需要自行绑定 data 事件监听器,以处理套接字上发往服务器的数据。

Event: 'upgrade'#

function (request, socket, head) { }

function (request, socket, head) { }

Emitted each time a client requests a http upgrade. If this event isn't listened for, then clients requesting an upgrade will have their connections closed.

每当一个客户端请求http升级时,该事件被分发。如果这个事件没有被监听,那么这些请求升级的客户端的连接将会被关闭。

  • request is the arguments for the http request, as it is in the request event.
  • socket is the network socket between the server and client.
  • head is an instance of Buffer, the first packet of the upgraded stream, this may be empty.

  • request 是该HTTP请求的参数,与request事件中的相同。

  • socket 是服务端与客户端之间的网络套接字。
  • head 是一个Buffer实例,升级后流的第一个包,该参数可能为空。

After this event is emitted, the request's socket will not have a data event listener, meaning you will need to bind to it in order to handle data sent to the server on that socket.

该事件触发后,请求的套接字上不会有 data 事件监听器,也就是说您需要自行绑定 data 事件监听器,以处理套接字上发往服务器的数据。

Event: 'clientError'#

function (exception, socket) { }

function (exception, socket) { }

If a client connection emits an 'error' event - it will forwarded here.

如果一个客户端连接触发了 'error' 事件,它就会被转发到这里。

socket is the net.Socket object that the error originated from.

socket 是导致错误的 net.Socket 对象。

server.listen(port, [hostname], [backlog], [callback])#

Begin accepting connections on the specified port and hostname. If the hostname is omitted, the server will accept connections directed to any IPv4 address (INADDR_ANY).

开始在指定的主机名和端口接收连接。如果省略主机名,服务器会接收指向任意IPv4地址的链接(INADDR_ANY)。

To listen to a unix socket, supply a filename instead of port and hostname.

监听一个 unix socket, 需要提供一个文件名而不是端口号和主机名。

Backlog is the maximum length of the queue of pending connections. The actual length will be determined by your OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on linux. The default value of this parameter is 511 (not 512).

积压量 backlog 为连接等待队列的最大长度。实际长度由您的操作系统通过 sysctl 设置决定,比如 Linux 上的 tcp_max_syn_backlogsomaxconn。该参数缺省值为 511(不是 512)。

This function is asynchronous. The last parameter callback will be added as a listener for the 'listening' event. See also net.Server.listen(port).

这个函数是异步的。最后一个参数callback会被作为事件监听器添加到 'listening'事件。另见net.Server.listen(port)

server.listen(path, [callback])#

Start a UNIX socket server listening for connections on the given path.

启动一个 UNIX 套接字服务器在所给路径 path 上监听连接。

This function is asynchronous. The last parameter callback will be added as a listener for the 'listening' event. See also net.Server.listen(path).

该函数是异步的。最后一个参数 callback 会被作为 'listening' 事件的监听器添加。另见 net.Server.listen(path)。

server.listen(handle, [callback])#

  • handle Object
  • callback Function

  • handle Object

  • callback Function

The handle object can be set to either a server or socket (anything with an underlying _handle member), or a {fd: <n>} object.

handle 对象可以被设置为一个服务器或套接字(任何具有底层 _handle 成员的对象),或者一个 {fd: <n>} 对象。

This will cause the server to accept connections on the specified handle, but it is presumed that the file descriptor or handle has already been bound to a port or domain socket.

这将使服务器用指定的句柄接受连接,但它假设文件描述符或者句柄已经被绑定在特定的端口或者域名套接字。

Listening on a file descriptor is not supported on Windows.

Windows 不支持监听一个文件描述符。

This function is asynchronous. The last parameter callback will be added as a listener for the 'listening' event. See also net.Server.listen().

这个函数是异步的。最后一个参数callback会被作为事件监听器添加到'listening'事件。另见net.Server.listen()

server.close([callback])#

Stops the server from accepting new connections. See net.Server.close().

停止服务端接收新的连接。另见 net.Server.close()。

server.maxHeadersCount#

Limits maximum incoming headers count, equal to 1000 by default. If set to 0 - no limit will be applied.

最大请求头数目限制,默认为 1000 个。如果设置为 0,则不做任何限制。

server.setTimeout(msecs, callback)#

  • msecs Number
  • callback Function

  • msecs Number

  • callback Function

Sets the timeout value for sockets, and emits a 'timeout' event on the Server object, passing the socket as an argument, if a timeout occurs.

为套接字设定超时值。如果一个超时发生,那么Server对象上会分发一个'timeout'事件,同时将套接字作为参数传递。

If there is a 'timeout' event listener on the Server object, then it will be called with the timed-out socket as an argument.

如果在Server对象上有一个'timeout'事件监听器,那么它将被调用,而超时的套接字会作为参数传递给这个监听器。

By default, the Server's timeout value is 2 minutes, and sockets are destroyed automatically if they time out. However, if you assign a callback to the Server's 'timeout' event, then you are responsible for handling socket timeouts.

默认情况下,服务器的超时时间是2分钟,超时后套接字会自动销毁。 但是如果为‘timeout’事件指定了回调函数,你需要负责处理套接字超时。

server.timeout#

  • Number Default = 120000 (2 minutes)

  • Number 默认 120000 (2 分钟)

The number of milliseconds of inactivity before a socket is presumed to have timed out.

一个套接字被判断为超时之前的闲置毫秒数。

Note that the socket timeout logic is set up on connection, so changing this value only affects new connections to the server, not any existing connections.

注意套接字的超时逻辑在连接时被设定,所以更改这个值只会影响新创建的连接,而不会影响到现有连接。

Set to 0 to disable any kind of automatic timeout behavior on incoming connections.

设置为0将阻止之后建立的连接的一切自动超时行为。

Class: http.ServerResponse#

This object is created internally by a HTTP server--not by the user. It is passed as the second parameter to the 'request' event.

该对象由 HTTP 服务器在内部创建,而不是由用户创建。它作为第二个参数被传递给 'request' 事件。

The response implements the Writable Stream interface. This is an EventEmitter with the following events:

响应对象实现了可写流(Writable Stream)接口。这是一个包含下列事件的 EventEmitter:

事件: 'close'#

function () { }

function () { }

Indicates that the underlying connection was terminated before response.end() was called or able to flush.

表示底层连接在 response.end() 被调用或能够刷新之前被终止。

response.writeContinue()#

Sends a HTTP/1.1 100 Continue message to the client, indicating that the request body should be sent. See the 'checkContinue' event on Server.

向客户端发送一个 HTTP/1.1 100 Continue 消息,表示请求体可以开始发送。参见 Server 的 'checkContinue' 事件。

response.writeHead(statusCode, [reasonPhrase], [headers])#

Sends a response header to the request. The status code is a 3-digit HTTP status code, like 404. The last argument, headers, are the response headers. Optionally one can give a human-readable reasonPhrase as the second argument.

向请求发送一个响应头。statusCode 是一个三位数的 HTTP 状态码,例如 404。最后一个参数 headers 是响应头的内容。可选地,可以将一个人类可读的 reasonPhrase(原因短语)作为第二个参数。

Example:

示例:

var body = 'hello world';
response.writeHead(200, {
  'Content-Length': body.length,
  'Content-Type': 'text/plain' });

This method must only be called once on a message and it must be called before response.end() is called.

该方法在一个消息上只能调用一次,且必须在 response.end() 被调用之前调用。

If you call response.write() or response.end() before calling this, the implicit/mutable headers will be calculated and call this function for you.

如果您在调用该方法之前调用了 response.write() 或 response.end(),则隐含的/可变的头会被计算出来,并自动为您调用该函数。

Note: that Content-Length is given in bytes not characters. The above example works because the string 'hello world' contains only single byte characters. If the body contains higher coded characters then Buffer.byteLength() should be used to determine the number of bytes in a given encoding. And Node does not check whether Content-Length and the length of the body which has been transmitted are equal or not.

注意:Content-Length 是以字节(byte)计,而不是以字符(character)计。上面的例子奏效的原因是字符串 'hello world' 只包含单字节字符。如果 body 包含多字节编码的字符,就应当使用 Buffer.byteLength() 来确定给定编码下的字节数。此外,Node 不会检查 Content-Length 与已传输的报文体长度是否相等。
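
用 Buffer.byteLength() 计算多字节字符串字节数的一个小例子:

```javascript
var body = '你好, world';                       // 含多字节字符的字符串

console.log(body.length);                       // 9(字符数)
console.log(Buffer.byteLength(body, 'utf8'));   // 13(UTF-8 字节数)

// 设置 Content-Length 时应当使用字节数而非字符数
var headers = { 'Content-Length': Buffer.byteLength(body, 'utf8') };
```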

response.setTimeout(msecs, callback)#

  • msecs Number
  • callback Function

  • msecs Number

  • callback Function

Sets the Socket's timeout value to msecs. If a callback is provided, then it is added as a listener on the 'timeout' event on the response object.

设定套接字的超时时间为msecs。如果提供了回调函数,会将其添加为响应对象的'timeout'事件的监听器。

If no 'timeout' listener is added to the request, the response, or the server, then sockets are destroyed when they time out. If you assign a handler on the request, the response, or the server's 'timeout' events, then it is your responsibility to handle timed out sockets.

如果请求、响应、服务器均未添加'timeout'事件监听,套接字将在超时时被销毁。 如果监听了请求、响应、服务器之一的'timeout'事件,需要自行处理超时的套接字。

response.statusCode#

When using implicit headers (not calling response.writeHead() explicitly), this property controls the status code that will be sent to the client when the headers get flushed.

在使用隐式报文头时(没有显式调用 response.writeHead()),该属性控制报文头刷新时发送给客户端的状态码。

Example:

示例:

response.statusCode = 404;

After response header was sent to the client, this property indicates the status code which was sent out.

响应头发送给客户端之后,该属性表示已发出的状态码。

response.setHeader(name, value)#

Sets a single header value for implicit headers. If this header already exists in the to-be-sent headers, its value will be replaced. Use an array of strings here if you need to send multiple headers with the same name.

为隐式报文头设置单个头的值。如果该头已存在于待发送的头中,其值会被替换。如果需要发送多个同名的头,请使用字符串数组。

Example:

示例:

response.setHeader("Content-Type", "text/html");

or

或者

response.setHeader("Set-Cookie", ["type=ninja", "language=javascript"]);

response.headersSent#

Boolean (read-only). True if headers were sent, false otherwise.

Boolean值(只读).如果headers发送完毕,则为true,反之为false

response.sendDate#

When true, the Date header will be automatically generated and sent in the response if it is not already present in the headers. Defaults to true.

若为true,则当headers里没有Date值时自动生成Date并发送.默认值为true

This should only be disabled for testing; HTTP requires the Date header in responses.

只有在测试环境才禁用它; 因为 HTTP 要求响应包含 Date 头.

response.getHeader(name)#

Reads out a header that's already been queued but not sent to the client. Note that the name is case insensitive. This can only be called before headers get implicitly flushed.

读取一个已经进入队列但还未发送给客户端的头。注意名称不区分大小写。该方法只能在报文头被隐式刷新之前调用。

Example:

示例:

var contentType = response.getHeader('content-type');

response.removeHeader(name)#

Removes a header that's queued for implicit sending.

移除一个等待隐式发送的头。

Example:

示例:

response.removeHeader("Content-Encoding");

response.write(chunk, [encoding])#

If this method is called and response.writeHead() has not been called, it will switch to implicit header mode and flush the implicit headers.

如果调用该方法时 response.writeHead() 还未被调用,则会切换到隐式报文头模式并刷新隐式报文头。

This sends a chunk of the response body. This method may be called multiple times to provide successive parts of the body.

该方法发送响应体的一块数据。可以多次调用该方法以提供响应体的连续部分。

chunk can be a string or a buffer. If chunk is a string, the second parameter specifies how to encode it into a byte stream. By default the encoding is 'utf8'.

chunk 可以是一个字符串或一个 buffer。如果 chunk 是字符串,则第二个参数指定如何将它编码为字节流。缺省编码为 'utf8'。

Note: This is the raw HTTP body and has nothing to do with higher-level multi-part body encodings that may be used.

注意:这是原始的 HTTP 报文体,与可能被使用的高层多段报文体编码无关。

The first time response.write() is called, it will send the buffered header information and the first body to the client. The second time response.write() is called, Node assumes you're going to be streaming data, and sends that separately. That is, the response is buffered up to the first chunk of body.

第一次调用 response.write() 时,它会将缓存的报文头信息和第一块报文体发送给客户端。第二次调用 response.write() 时,Node 会认为您要流式发送数据,并单独发送。也就是说,响应只缓存到报文体的第一块为止。

Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is again free.

如果所有数据都被成功刷新到内核缓冲区,则返回 true。如果所有或部分数据在用户内存里还处于队列中,则返回 false。当缓冲区再次空闲时,'drain' 事件会被触发。

response.addTrailers(headers)#

This method adds HTTP trailing headers (a header but at the end of the message) to the response.

该方法向响应添加 HTTP 尾部报文头(trailing headers,即位于消息末尾的报文头)。

Trailers will only be emitted if chunked encoding is used for the response; if it is not (e.g., if the request was HTTP/1.0), they will be silently discarded.

只有当响应使用分块编码时,尾部报文头才会被发出;否则(比如请求为 HTTP/1.0)它们会被悄悄丢弃。

Note that HTTP requires the Trailer header to be sent if you intend to emit trailers, with a list of the header fields in its value. E.g.,

response.writeHead(200, { 'Content-Type': 'text/plain',
                          'Trailer': 'Content-MD5' });
response.write(fileData);
response.addTrailers({'Content-MD5': "7895bf4b8828b55ceaf47747b4bca667"});
response.end();

response.end([data], [encoding])#

This method signals to the server that all of the response headers and body have been sent; the server should consider this message complete. The method, response.end(), MUST be called on each response.

If data is specified, it is equivalent to calling response.write(data, encoding) followed by response.end().

http.request(options, callback)#

Node maintains several connections per server to make HTTP requests. This function allows one to transparently issue requests.

options can be an object or a string. If options is a string, it is automatically parsed with url.parse().

options 可以是一个对象或一个字符串。如果 options是一个字符串, 它将自动的使用url.parse()解析。

Options:

选项:

  • host: A domain name or IP address of the server to issue the request to. Defaults to 'localhost'.
  • hostname: To support url.parse() hostname is preferred over host
  • port: Port of remote server. Defaults to 80.
  • localAddress: Local interface to bind for network connections.
  • socketPath: Unix Domain Socket (use one of host:port or socketPath)
  • method: A string specifying the HTTP request method. Defaults to 'GET'.
  • path: Request path. Defaults to '/'. Should include query string if any. E.G. '/index.html?page=12'. An exception is thrown when the request path contains illegal characters. Currently, only spaces are rejected but that may change in the future.
  • headers: An object containing request headers.
  • auth: Basic authentication i.e. 'user:password' to compute an Authorization header.
  • agent: Controls Agent behavior. When an Agent is used request will default to Connection: keep-alive. Possible values:
    • undefined (default): use global Agent for this host and port.
    • Agent object: explicitly use the passed in Agent.
    • false: opts out of connection pooling with an Agent, defaults request to Connection: close.
  • keepAlive: {Boolean} Keep sockets around in a pool to be used by other requests in the future. Default = false
  • keepAliveMsecs: {Integer} When using HTTP KeepAlive, how often to send TCP KeepAlive packets over sockets being kept alive. Default = 1000. Only relevant if keepAlive is set to true.

  • host: 要发送请求的服务端域名或IP地址。 默认为'localhost'

  • hostname: 要支持url.parse()的话,优先使用hostname而不是host
  • port: 远程服务器的端口。默认为80。
  • localAddress: 本地接口,用来绑定网络连接。
  • socketPath: Unix 域套接字(host:port 与 socketPath 二选一)。
  • method: 指定 HTTP 请求方法的字符串。默认为 'GET'。
  • path: 请求路径。默认为 '/'。如有查询字符串则应包含在内,例如 '/index.html?page=12'。当请求路径含有非法字符时会抛出异常。目前只有空格会被拒绝,但将来可能有所变化。
  • headers: 包含请求头的对象。
  • auth: 基本认证(Basic Authentication),即 'user:password',用于计算 Authorization 头。
  • agent: 控制 Agent 的行为。当使用 Agent 时,请求将默认为 Connection: keep-alive。可能的值有:
    • undefined(默认):对该主机和端口使用全局 Agent。
    • Agent 对象:明确使用传入的 Agent。
    • false: 不使用 Agent 连接池,请求默认为 Connection: close。
  • keepAlive: {Boolean} 将套接字保留在池中,以供将来其他请求使用。默认为 false。
  • keepAliveMsecs: {Integer} 使用 HTTP KeepAlive 时,在保持连接的套接字上发送 TCP KeepAlive 包的频率。默认为 1000。仅当 keepAlive 为 true 时有效。
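The auth option is shorthand for a standard Basic Authorization header. A minimal sketch of the value Node derives from it (the basicAuthHeader helper is our own, not part of the API):

auth 选项只是标准 Basic Authorization 头的一种简写。下面示意 Node 由它计算出的值(basicAuthHeader 这个辅助函数是本文虚构的,并非 API 的一部分):

```javascript
// Hypothetical helper mirroring what the `auth` option produces:
// Basic auth is just the base64 form of the 'user:password' string.
function basicAuthHeader(auth) {
  return 'Basic ' + new Buffer(auth).toString('base64');
}

console.log(basicAuthHeader('user:password'));
// 'Basic dXNlcjpwYXNzd29yZA=='
```

This is equivalent to passing { headers: { 'Authorization': basicAuthHeader('user:password') } } yourself.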

http.request() returns an instance of the http.ClientRequest class. The ClientRequest instance is a writable stream. If one needs to upload a file with a POST request, then write to the ClientRequest object.

http.request() 返回一个 http.ClientRequest类的实例。ClientRequest实例是一个可写流对象。如果需要用POST请求上传一个文件的话,就将其写入到ClientRequest对象。

Example:

示例:

// write data to request body
req.write('data\n');
req.write('data\n');
req.end();

Note that in the example req.end() was called. With http.request() one must always call req.end() to signify that you're done with the request - even if there is no data being written to the request body.

注意,示例中调用了 req.end()。使用 http.request() 时必须总是调用 req.end() 来表明请求已经完成,即使没有任何数据被写入请求体。

If any error is encountered during the request (be that with DNS resolution, TCP level errors, or actual HTTP parse errors) an 'error' event is emitted on the returned request object.

There are a few special headers that should be noted.

  • Sending a 'Connection: keep-alive' will notify Node that the connection to the server should be persisted until the next request.

  • Sending a 'Content-length' header will disable the default chunked encoding.

  • 发送 'Content-length' 头将会禁用默认的 chunked 编码.

  • Sending an 'Expect' header will immediately send the request headers. Usually, when sending 'Expect: 100-continue', you should both set a timeout and listen for the continue event. See RFC2616 Section 8.2.3 for more information.

  • Sending an Authorization header will override using the auth option to compute basic authentication.

http.get(options, callback)#

Since most requests are GET requests without bodies, Node provides this convenience method. The only difference between this method and http.request() is that it sets the method to GET and calls req.end() automatically.

Example:

示例:

http.get("http://www.google.com/index.html", function(res) {
  console.log("Got response: " + res.statusCode);
}).on('error', function(e) {
  console.log("Got error: " + e.message);
});

Class: http.Agent#

The HTTP Agent is used for pooling sockets used in HTTP client requests.

The HTTP Agent also defaults client requests to using Connection:keep-alive. If no pending HTTP requests are waiting on a socket to become free the socket is closed. This means that Node's pool has the benefit of keep-alive when under load but still does not require developers to manually close the HTTP clients using KeepAlive.

If you opt into using HTTP KeepAlive, you can create an Agent object with that flag set to true. (See the constructor options below.) Then, the Agent will keep unused sockets in a pool for later use. They will be explicitly marked so as to not keep the Node process running. However, it is still a good idea to explicitly destroy() KeepAlive agents when they are no longer in use, so that the Sockets will be shut down.

Sockets are removed from the agent's pool when the socket emits either a "close" event or a special "agentRemove" event. This means that if you intend to keep one HTTP request open for a long time and don't want it to stay in the pool you can do something along the lines of:

http.get(options, function(res) {
  // Do stuff
}).on("socket", function (socket) {
  socket.emit("agentRemove");
});

Alternatively, you could just opt out of pooling entirely using agent:false:

http.get({
  hostname: 'localhost',
  port: 80,
  path: '/',
  agent: false  // create a new agent just for this one request
}, function (res) {
  // Do stuff with response
})

new Agent([options])#

  • options Object Set of configurable options to set on the agent. Can have the following fields:
    • keepAlive Boolean Keep sockets around in a pool to be used by other requests in the future. Default = false
    • keepAliveMsecs Integer When using HTTP KeepAlive, how often to send TCP KeepAlive packets over sockets being kept alive. Default = 1000. Only relevant if keepAlive is set to true.
    • maxSockets Number Maximum number of sockets to allow per host. Default = Infinity.
    • maxFreeSockets Number Maximum number of sockets to leave open in a free state. Only relevant if keepAlive is set to true. Default = 256.

The default http.globalAgent that is used by http.request has all of these values set to their respective defaults.

To configure any of them, you must create your own Agent object.

要配置这些值,你必须创建一个你自己的Agent对象。

var http = require('http');
var keepAliveAgent = new http.Agent({ keepAlive: true });
options.agent = keepAliveAgent;
http.request(options, onResponseCallback);

agent.maxSockets#

By default set to Infinity. Determines how many concurrent sockets the agent can have open per host.

agent.maxFreeSockets#

By default set to 256. For Agents supporting HTTP KeepAlive, this sets the maximum number of sockets that will be left open in the free state.

agent.sockets#

An object which contains arrays of sockets currently in use by the Agent. Do not modify.

agent.freeSockets#

An object which contains arrays of sockets currently awaiting use by the Agent when HTTP KeepAlive is used. Do not modify.

agent.requests#

An object which contains queues of requests that have not yet been assigned to sockets. Do not modify.

agent.destroy()#

Destroy any sockets that are currently in use by the agent.

销毁被此 agent 正在使用着的所有 sockets.

It is usually not necessary to do this. However, if you are using an agent with KeepAlive enabled, then it is best to explicitly shut down the agent when you know that it will no longer be used. Otherwise, sockets may hang open for quite a long time before the server terminates them.

agent.getName(options)#

Get a unique name for a set of request options, to determine whether a connection can be reused. In the http agent, this returns host:port:localAddress. In the https agent, the name includes the CA, cert, ciphers, and other HTTPS/TLS-specific options that determine socket reusability.
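For the plain http agent, this name is simply the three fields joined by colons. A sketch of the equivalent computation (our own re-implementation for illustration, not the internal code):

对于普通的 http agent,这个名称就是用冒号连接的三个字段。下面是等价计算的示意(本文自行重写,并非内部实现):

```javascript
// Illustrative version of the http agent's socket-name scheme:
// host:port:localAddress, with missing parts left empty.
function getName(options) {
  var name = (options.host || 'localhost') + ':';
  if (options.port) name += options.port;
  name += ':';
  if (options.localAddress) name += options.localAddress;
  return name;
}

console.log(getName({ host: 'nodejs.org', port: 80 }));
// 'nodejs.org:80:'
```

Requests whose options produce the same name may share a pooled socket.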

http.globalAgent#

Global instance of Agent which is used as the default for all http client requests.

Class: http.ClientRequest#

This object is created internally and returned from http.request(). It represents an in-progress request whose header has already been queued. The header is still mutable using the setHeader(name, value), getHeader(name), removeHeader(name) API. The actual header will be sent along with the first data chunk or when closing the connection.

To get the response, add a listener for 'response' to the request object. 'response' will be emitted from the request object when the response headers have been received. The 'response' event is executed with one argument which is an instance of http.IncomingMessage.

During the 'response' event, one can add listeners to the response object; particularly to listen for the 'data' event.

If no 'response' handler is added, then the response will be entirely discarded. However, if you add a 'response' event handler, then you must consume the data from the response object, either by calling response.read() whenever there is a 'readable' event, or by adding a 'data' handler, or by calling the .resume() method. Until the data is consumed, the 'end' event will not fire. Also, until the data is read it will consume memory that can eventually lead to a 'process out of memory' error.

Note: Node does not check whether Content-Length and the length of the body which has been transmitted are equal or not.

The request implements the Writable Stream interface. This is an EventEmitter with the following events:

Event 'response'#

function (response) { }

Emitted when a response is received to this request. This event is emitted only once. The response argument will be an instance of http.IncomingMessage.

Options:

选项:

  • host: A domain name or IP address of the server to issue the request to.
  • port: Port of remote server.
  • socketPath: Unix Domain Socket (use one of host:port or socketPath)

  • host: 请求要发送的域名或服务器的IP地址。

  • port: 远程服务器的端口。
  • socketPath: Unix 域套接字(host:port 与 socketPath 二选一)

Event: 'socket'#

function (socket) { }

Emitted after a socket is assigned to this request.

当一个套接字被分配到这个请求之后,该事件被分发。

事件: 'connect'#

function (response, socket, head) { }

Emitted each time a server responds to a request with a CONNECT method. If this event isn't being listened for, clients receiving a CONNECT method will have their connections closed.

A client and server pair that shows you how to listen for the 'connect' event.

    // make a request over an HTTP tunnel
    socket.write('GET / HTTP/1.1\r\n' +
                 'Host: www.google.com:80\r\n' +
                 'Connection: close\r\n' +
                 '\r\n');
    socket.on('data', function(chunk) {
      console.log(chunk.toString());
    });
    socket.on('end', function() {
      proxy.close();
    });
  });
});

Event: 'upgrade'#

function (response, socket, head) { }

Emitted each time a server responds to a request with an upgrade. If this event isn't being listened for, clients receiving an upgrade header will have their connections closed.

A client and server pair that shows you how to listen for the 'upgrade' event.

  req.on('upgrade', function(res, socket, upgradeHead) {
    console.log('got upgraded!');
    socket.end();
    process.exit(0);
  });
});

Event: 'continue'#

function () { }

Emitted when the server sends a '100 Continue' HTTP response, usually because the request contained 'Expect: 100-continue'. This is an instruction that the client should send the request body.

request.write(chunk, [encoding])#

Sends a chunk of the body. By calling this method many times, the user can stream a request body to a server--in that case it is suggested to use the ['Transfer-Encoding', 'chunked'] header line when creating the request.

The chunk argument should be a Buffer or a string.

chunk 参数必须是 Buffer 或者 string.

The encoding argument is optional and only applies when chunk is a string. Defaults to 'utf8'.

encoding 参数是可选的, 并且只能在 chunk 是 string 类型的时候才能设置. 默认是 'utf8'.

request.end([data], [encoding])#

Finishes sending the request. If any parts of the body are unsent, it will flush them to the stream. If the request is chunked, this will send the terminating '0\r\n\r\n'.

If data is specified, it is equivalent to calling request.write(data, encoding) followed by request.end().

request.abort()#

Aborts a request. (New since v0.3.8.)

终止一个请求. (从 v0.3.8 开始新加.)

request.setTimeout(timeout, [callback])#

Once a socket is assigned to this request and is connected socket.setTimeout() will be called.

request.setNoDelay([noDelay])#

Once a socket is assigned to this request and is connected socket.setNoDelay() will be called.

request.setSocketKeepAlive([enable], [initialDelay])#

Once a socket is assigned to this request and is connected socket.setKeepAlive() will be called.

一旦一个套接字被分配到这个请求,而且成功连接,那么socket.setKeepAlive()就会被调用。

http.IncomingMessage#

An IncomingMessage object is created by http.Server or http.ClientRequest and passed as the first argument to the 'request' and 'response' event respectively. It may be used to access response status, headers and data.

IncomingMessage 对象由 http.Serverhttp.ClientRequest 创建,并分别作为第一个参数传递给 'request''response' 事件。它可以用来访问响应的状态、头部和数据。

It implements the Readable Stream interface, as well as the following additional events, methods, and properties.

它实现了 可读流 接口,以及以下额外的事件、方法和属性。

事件: 'close'#

function () { }

Indicates that the underlying connection was terminated before response.end() was called or able to flush.

表示在response.end()被调用或强制刷新之前,底层的连接已经被终止了。

Just like 'end', this event occurs only once per response. See [http.ServerResponse][]'s 'close' event for more information.

'end'一样,这个事件对于每个应答只会触发一次。详见[http.ServerResponse][]的 'close'事件。

message.httpVersion#

In case of server request, the HTTP version sent by the client. In the case of client response, the HTTP version of the connected-to server. Probably either '1.1' or '1.0'.

客户端向服务器发出请求时,客户端发送的HTTP版本;或是服务器向客户端返回应答时,服务器的HTTP版本。通常是 '1.1''1.0'

Also response.httpVersionMajor is the first integer and response.httpVersionMinor is the second.

message.headers#

The request/response headers object.

请求/响应 头对象.

Read only map of header names and values. Header names are lower-cased. Example:

只读的头名称和值的映射。头名称全部为小写。示例:

// 输出类似这样:
//
// { 'user-agent': 'curl/7.22.0',
//   host: '127.0.0.1:8000',
//   accept: '*/*' }
console.log(request.headers);

message.rawHeaders#

The raw request/response headers list exactly as they were received.

Note that the keys and values are in the same list. It is not a list of tuples. So, the even-numbered offsets are key values, and the odd-numbered offsets are the associated values.

Header names are not lowercased, and duplicates are not merged.

// Prints something like:
//
// [ 'user-agent',
//   'this is invalid because there can be only one',
//   'User-Agent',
//   'curl/7.22.0',
//   'Host',
//   '127.0.0.1:8000',
//   'ACCEPT',
//   '*/*' ]
console.log(request.rawHeaders);
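Because even offsets hold names and odd offsets hold the associated values, turning the flat list back into pairs is a short loop. A sketch (the pairs helper is our own, not part of the API):

由于偶数位是名称、奇数位是对应的值,把这个扁平列表还原成键值对只需一个简单循环。示意如下(pairs 这个辅助函数是本文虚构的):

```javascript
// Group a flat [name, value, name, value, ...] list into pairs.
function pairs(rawHeaders) {
  var out = [];
  for (var i = 0; i < rawHeaders.length; i += 2) {
    out.push([rawHeaders[i], rawHeaders[i + 1]]);
  }
  return out;
}

console.log(pairs(['Host', '127.0.0.1:8000', 'ACCEPT', '*/*']));
// [ [ 'Host', '127.0.0.1:8000' ], [ 'ACCEPT', '*/*' ] ]
```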

message.trailers#

The request/response trailers object. Only populated at the 'end' event.

message.rawTrailers#

The raw request/response trailer keys and values exactly as they were received. Only populated at the 'end' event.

message.setTimeout(msecs, callback)#

  • msecs Number
  • callback Function

Calls message.connection.setTimeout(msecs, callback).

调用message.connection.setTimeout(msecs, callback)

message.method#

Only valid for request obtained from http.Server.

仅对从 http.Server 获得的请求(request)有效。

The request method as a string. Read only. Example: 'GET', 'DELETE'.

字符串形式的请求(request)方法。只读。例如:'GET''DELETE'

message.url#

Only valid for request obtained from http.Server.

仅对从 http.Server 获得的请求(request)有效。

Request URL string. This contains only the URL that is present in the actual HTTP request. If the request is:

请求的 URL 字符串。它仅包含实际 HTTP 请求中所提供的 URL。假如请求如下:

GET /status?name=ryan HTTP/1.1\r\n
Accept: text/plain\r\n
\r\n

Then request.url will be:

request.url 为:

'/status?name=ryan'

If you would like to parse the URL into its parts, you can use require('url').parse(request.url). Example:

如果你想将该 URL 解析成各个部分,可以使用 require('url').parse(request.url)。例如:

node> require('url').parse('/status?name=ryan')
{ href: '/status?name=ryan',
  search: '?name=ryan',
  query: 'name=ryan',
  pathname: '/status' }

If you would like to extract the params from the query string, you can use the require('querystring').parse function, or pass true as the second argument to require('url').parse. Example:

如果你想从查询字符串(query string)中提取参数,可以使用 require('querystring').parse 函数,或者将 true 作为第二个参数传递给 require('url').parse。例如:

node> require('url').parse('/status?name=ryan', true)
{ href: '/status?name=ryan',
  search: '?name=ryan',
  query: { name: 'ryan' },
  pathname: '/status' }

message.statusCode#

Only valid for response obtained from http.ClientRequest.

仅对从http.ClientRequest获得的响应(response)有效.

The 3-digit HTTP response status code. E.G. 404.

三位数的HTTP响应状态码. 例如 404.

message.socket#

The net.Socket object associated with the connection.

与此连接(connection)关联的net.Socket对象.

With HTTPS support, use request.connection.verifyPeer() and request.connection.getPeerCertificate() to obtain the client's authentication details.

在支持 HTTPS 的情况下,可使用 request.connection.verifyPeer()request.connection.getPeerCertificate() 来获取客户端的认证信息。

HTTPS#

稳定度: 3 - 稳定

HTTPS is the HTTP protocol over TLS/SSL. In Node this is implemented as a separate module.

HTTPS 是建立在 TLS/SSL 之上的 HTTP 协议。在 Node 中被实现为单独的模块。

类: https.Server#

This class is a subclass of tls.Server and emits events same as http.Server. See http.Server for more information.

该类是 tls.Server 的子类,并且触发与 http.Server 相同的事件。更多信息详见 http.Server

server.setTimeout(msecs, callback)#

See http.Server#setTimeout().

详见 http.Server#setTimeout()

server.timeout#

See http.Server#timeout.

详见 http.Server#timeout

https.createServer(options, [requestListener])#

Returns a new HTTPS web server object. The options is similar to tls.createServer(). The requestListener is a function which is automatically added to the 'request' event.

返回一个新的 HTTPS Web 服务器对象。其中 options 类似于 tls.createServer()requestListener 是一个会被自动添加到 request 事件的函数。

Example:

示例:

https.createServer(options, function (req, res) {
  res.writeHead(200);
  res.end("hello world\n");
}).listen(8000);

server.listen(port, [host], [backlog], [callback])#

server.listen(path, [callback])#

server.listen(handle, [callback])#

See http.listen() for details.

详见 http.listen()

server.close([callback])#

See http.close() for details.

详见 http.close()

https.request(options, callback)#

Makes a request to a secure web server.

向一个安全 Web 服务器发送请求。

options can be an object or a string. If options is a string, it is automatically parsed with url.parse().

options 可以是一个对象或字符串。如果 options 是字符串,它会自动被 url.parse() 解析。

All options from http.request() are valid.

http.request() 的所有选项在这里都同样可用。

Example:

示例:

req.on('error', function(e) {
  console.error(e);
});

The options argument has the following options

options 参数有如下选项

  • host: A domain name or IP address of the server to issue the request to. Defaults to 'localhost'.
  • hostname: To support url.parse() hostname is preferred over host
  • port: Port of remote server. Defaults to 443.
  • method: A string specifying the HTTP request method. Defaults to 'GET'.
  • path: Request path. Defaults to '/'. Should include query string if any. E.G. '/index.html?page=12'
  • headers: An object containing request headers.
  • auth: Basic authentication i.e. 'user:password' to compute an Authorization header.
  • agent: Controls Agent behavior. When an Agent is used request will default to Connection: keep-alive. Possible values:

    • undefined (default): use globalAgent for this host and port.
    • Agent object: explicitly use the passed in Agent.
    • false: opts out of connection pooling with an Agent, defaults request to Connection: close.
  • host:发送请求的服务器的域名或 IP 地址,缺省为 'localhost'

  • hostname:为了支持 url.parse()hostname 优先于 host
  • port:远程服务器的端口,缺省为 443。
  • method:指定 HTTP 请求方法的字符串,缺省为 'GET'
  • path:请求路径,缺省为 '/'。如有查询字串则应包含,比如 '/index.html?page=12'
  • headers:包含请求头的对象。
  • auth:基本认证,如 'user:password' 来计算 Authorization 头。
  • agent:控制 Agent 行为。当使用 Agent 时请求会缺省为 Connection: keep-alive。可选值有:
    • undefined(缺省):为该主机和端口使用 globalAgent
    • Agent 对象:明确使用传入的 Agent
    • false:不使用 Agent 连接池,缺省请求 Connection: close

The following options from tls.connect() can also be specified. However, a globalAgent silently ignores these.

下列来自 tls.connect() 的选项也可以被指定,但 globalAgent 会静默忽略它们。

  • pfx: Certificate, Private key and CA certificates to use for SSL. Default null.
  • key: Private key to use for SSL. Default null.
  • passphrase: A string of passphrase for the private key or pfx. Default null.
  • cert: Public x509 certificate to use. Default null.
  • ca: An authority certificate or array of authority certificates to check the remote host against.
  • ciphers: A string describing the ciphers to use or exclude. Consult http://www.openssl.org/docs/apps/ciphers.html#CIPHER_LIST_FORMAT for details on the format.
  • rejectUnauthorized: If true, the server certificate is verified against the list of supplied CAs. An 'error' event is emitted if verification fails. Verification happens at the connection level, before the HTTP request is sent. Default true.
  • secureProtocol: The SSL method to use, e.g. SSLv3_method to force SSL version 3. The possible values depend on your installation of OpenSSL and are defined in the constant SSL_METHODS.

  • pfx:证书,SSL 所用的私钥或 CA 证书。缺省为 null

  • key:SSL 所用私钥。缺省为 null
  • passphrase:私钥或 pfx 的口令字符串,缺省为 null
  • cert:所用公有 x509 证书,缺省为 null
  • ca:用于检查远程主机的证书颁发机构或包含一系列证书颁发机构的数组。
  • ciphers:描述要使用或排除的密码的字符串,格式请参阅 http://www.openssl.org/docs/apps/ciphers.html#CIPHER_LIST_FORMAT
  • rejectUnauthorized:如为 true 则服务器证书会使用所给 CA 列表验证。如果验证失败则会触发 'error' 事件。验证发生于连接层,在 HTTP 请求发送之前。缺省为 true
  • secureProtocol:所用 SSL 方法,比如 SSLv3_method 强制使用 SSL version 3。可取值取决于您安装的 OpenSSL 并被定义在 SSL_METHODS 常量。

In order to specify these options, use a custom Agent.

要指定这些选项,使用一个自定义 Agent

Example:

示例:

var req = https.request(options, function(res) {
  ...
});

Or does not use an Agent.

或不使用 Agent

Example:

示例:

var req = https.request(options, function(res) {
  ...
});

https.get(options, callback)#

Like http.get() but for HTTPS.

类似 http.get() 但为 HTTPS。

options can be an object or a string. If options is a string, it is automatically parsed with url.parse().

options 可以是一个对象或字符串。如果 options 是字符串,它会自动被 url.parse() 解析。

Example:

示例:

}).on('error', function(e) {
  console.error(e);
});

类: https.Agent#

An Agent object for HTTPS similar to http.Agent. See https.request() for more information.

类似于 http.Agent 的 HTTPS Agent 对象。详见 https.request()

https.globalAgent#

Global instance of https.Agent for all HTTPS client requests.

所有 HTTPS 客户端请求的全局 https.Agent 实例。

URL#

稳定度: 3 - 稳定

This module has utilities for URL resolution and parsing. Call require('url') to use it.

该模块包含用以 URL 解析的实用函数。 使用 require('url') 来调用该模块。

Parsed URL objects have some or all of the following fields, depending on whether or not they exist in the URL string. Any parts that are not in the URL string will not be in the parsed object. Examples are shown for the URL

解析后的 URL 对象具有以下部分或全部字段,具体取决于它们是否存在于 URL 字符串中。URL 字符串中不存在的部分不会出现在解析后的对象里。以下以这个 URL 为例:

'http://user:pass@host.com:8080/p/a/t/h?query=string#hash'

  • href: The full URL that was originally parsed. Both the protocol and host are lowercased.

  • href: 所解析的完整原始 URL。协议名和主机名都已转为小写。

    例如: 'http://user:pass@host.com:8080/p/a/t/h?query=string#hash'

  • protocol: The request protocol, lowercased.

  • protocol: 请求协议,小写

    例如: 'http:'

  • host: The full lowercased host portion of the URL, including port information.

  • host: URL 中完整的主机部分,已全部转为小写,包括端口信息

    例如: 'host.com:8080'

  • auth: The authentication information portion of a URL.

  • auth:URL中身份验证信息部分

    例如: 'user:pass'

  • hostname: Just the lowercased hostname portion of the host.

  • hostname:主机的主机名部分, 已转换成小写

    例如: 'host.com'

  • port: The port number portion of the host.

  • port: 主机的端口号部分

    例如: '8080'

  • pathname: The path section of the URL, that comes after the host and before the query, including the initial slash if present.

  • pathname: URL 的路径部分,位于主机名之后、查询之前;如果存在开头的斜杠,也包含在内。

    例如: '/p/a/t/h'

  • search: The 'query string' portion of the URL, including the leading question mark.

  • search: URL 的“查询字符串”部分,包括开头的问号。

    例如: '?query=string'

  • path: Concatenation of pathname and search.

  • path: pathnamesearch 连在一起。

    例如: '/p/a/t/h?query=string'

  • query: Either the 'params' portion of the query string, or a querystring-parsed object.

  • query: 查询字符串中的参数部分(问号后面部分字符串),或者使用 querystring.parse() 解析后返回的对象。

    例如: 'query=string' or {'query':'string'}

  • hash: The 'fragment' portion of the URL including the pound-sign.

  • hash: URL 的 “#” 后面部分(包括 # 符号)

    例如: '#hash'

The following methods are provided by the URL module:

以下是 URL 模块提供的方法:

url.parse(urlStr, [parseQueryString], [slashesDenoteHost])#

Take a URL string, and return an object.

输入 URL 字符串,返回一个对象。

Pass true as the second argument to also parse the query string using the querystring module. Defaults to false.

将第二个参数设置为 true 则使用 querystring 模块来解析 URL 中的查询字符串部分,默认为 false

Pass true as the third argument to treat //foo/bar as { host: 'foo', pathname: '/bar' } rather than { pathname: '//foo/bar' }. Defaults to false.

将第三个参数设置为 true 来把诸如 //foo/bar 这样的URL解析为 { host: 'foo', pathname: '/bar' } 而不是 { pathname: '//foo/bar' }。 默认为 false

url.format(urlObj)#

Take a parsed URL object, and return a formatted URL string.

输入一个 URL 对象,返回格式化后的 URL 字符串。

  • href will be ignored.
  • protocolis treated the same with or without the trailing : (colon).
    • The protocols http, https, ftp, gopher, file will be postfixed with :// (colon-slash-slash).
    • All other protocols mailto, xmpp, aim, sftp, foo, etc will be postfixed with : (colon)
  • auth will be used if present.
  • hostname will only be used if host is absent.
  • port will only be used if host is absent.
  • host will be used in place of hostname and port
  • pathname is treated the same with or without the leading / (slash)
  • search will be used in place of query
  • query (object; see querystring) will only be used if search is absent.
  • search is treated the same with or without the leading ? (question mark)
  • hash is treated the same with or without the leading # (pound sign, anchor)

  • href 属性会被忽略。

  • protocol:无论是否带有末尾的 :(冒号),处理方式相同。
    • httphttpsftpgopherfile 这些协议会加上后缀 ://(冒号-斜杠-斜杠)。
    • 其他所有协议如 mailtoxmppaimsftpfoo 等,会加上后缀 :(冒号)。
  • auth:如果存在则会被使用。
  • hostname:仅当 host 属性未定义时才会被使用。
  • port:仅当 host 属性未定义时才会被使用。
  • host:会替代 hostnameport 被使用。
  • pathname:无论开头是否带有 /(斜杠),处理方式相同。
  • search:会替代 query 属性被使用。
  • queryobject 类型,详见 querystring):仅当 search 不存在时才会被使用。
  • search:无论开头是否带有 ?(问号),处理方式相同。
  • hash:无论开头是否带有 #(井号、锚点),处理方式相同。

url.resolve(from, to)#

Take a base URL, and a href URL, and resolve them as a browser would for an anchor tag. Examples:

以一个基础 URL 和一个 href URL 为参数,像浏览器解析锚点标签的链接那样解析它们。示例:

url.resolve('/one/two/three', 'four')         // '/one/two/four'
url.resolve('http://example.com/', '/one')    // 'http://example.com/one'
url.resolve('http://example.com/one', '/two') // 'http://example.com/two'

Query String#

稳定度: 3 - 稳定

This module provides utilities for dealing with query strings. It provides the following methods:

这个模块提供一些处理 query string 的工具。它提供下列方法:

querystring.stringify(obj, [sep], [eq])#

Serialize an object to a query string. Optionally override the default separator ('&') and assignment ('=') characters.

将一个对象序列化为 query string。可以选择覆盖默认的分隔符('&')和赋值符('=')。

Example:

示例:

querystring.stringify({foo: 'bar', baz: 'qux'}, ';', ':')
// 返回如下字串
'foo:bar;baz:qux'

querystring.parse(str, [sep], [eq], [options])#

Deserialize a query string to an object. Optionally override the default separator ('&') and assignment ('=') characters.

将一个 query string 反序列化为对象。可以选择覆盖默认的分隔符('&')和赋值符('=')。

Options object may contain maxKeys property (equal to 1000 by default), it'll be used to limit processed keys. Set it to 0 to remove key count limitation.

options 对象可以包含 maxKeys 属性(默认为 1000),用于限制被处理的键(key)的数量。设为 0 可去除对键数量的限制。

Example:

示例:

querystring.parse('foo=bar&baz=qux&baz=quux&corge')
// 返回
{ foo: 'bar', baz: ['qux', 'quux'], corge: '' }

querystring.escape#

The escape function used by querystring.stringify, provided so that it could be overridden if necessary.

querystring.stringify 使用的转义函数,在必要时可被重写。

querystring.unescape#

The unescape function used by querystring.parse, provided so that it could be overridden if necessary.

querystring.parse 使用的反转义函数,在必要时可被重写。

punycode#

稳定度: 2 - 不稳定

Punycode.js is bundled with Node.js v0.6.2+. Use require('punycode') to access it. (To use it with other Node.js versions, use npm to install the punycode module first.)

Punycode.js 自 Node.js v0.6.2+ 开始被内置,通过 require('punycode') 引入。(要在其它 Node.js 版本中使用它,请先使用 npm 安装 punycode 模块。)

punycode.decode(string)#

Converts a Punycode string of ASCII-only symbols to a string of Unicode symbols.

将一个纯 ASCII 符号的 Punycode 字符串转换为 Unicode 符号的字符串。

// 解码域名部分
punycode.decode('maana-pta'); // 'mañana'
punycode.decode('--dqo34k'); // '☃-⌘'

punycode.encode(string)#

Converts a string of Unicode symbols to a Punycode string of ASCII-only symbols.

将一个 Unicode 符号的字符串转换为纯 ASCII 符号的 Punycode 字符串。

// 编码域名部分
punycode.encode('mañana'); // 'maana-pta'
punycode.encode('☃-⌘'); // '--dqo34k'

punycode.toUnicode(domain)#

Converts a Punycode string representing a domain name to Unicode. Only the Punycoded parts of the domain name will be converted, i.e. it doesn't matter if you call it on a string that has already been converted to Unicode.

将一个表示域名的 Punycode 字符串转换为 Unicode。只有域名中的 Punycode 部分会转换,也就是说您在一个已经转换为 Unicode 的字符串上调用它也是没问题的。

// 解码域名
punycode.toUnicode('xn--maana-pta.com'); // 'mañana.com'
punycode.toUnicode('xn----dqo34k.com'); // '☃-⌘.com'

punycode.toASCII(domain)#

Converts a Unicode string representing a domain name to Punycode. Only the non-ASCII parts of the domain name will be converted, i.e. it doesn't matter if you call it with a domain that's already in ASCII.

将一个表示域名的 Unicode 字符串转换为 Punycode。只有域名的非 ASCII 部分会被转换,也就是说您在一个已经是 ASCII 的域名上调用它也是没问题的。

// 编码域名
punycode.toASCII('mañana.com'); // 'xn--maana-pta.com'
punycode.toASCII('☃-⌘.com'); // 'xn----dqo34k.com'

punycode.ucs2#

punycode.ucs2.decode(string)#

Creates an array containing the numeric code point values of each Unicode symbol in the string. While JavaScript uses UCS-2 internally, this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16.

创建一个数组,包含字符串中每个 Unicode 符号的数字编码点。尽管 JavaScript 内部使用 UCS-2,该函数会将一对代理半区 (surrogate halves,UCS-2 将其暴露为两个单独的字符) 转换为单个编码点,与 UTF-16 一致。

punycode.ucs2.decode('abc'); // [0x61, 0x62, 0x63]
// surrogate pair for U+1D306 tetragram for centre:
punycode.ucs2.decode('\uD834\uDF06'); // [0x1D306]

punycode.ucs2.encode(codePoints)#

Creates a string based on an array of numeric code point values.

以数字编码点的值的数组创建一个字符串。

punycode.ucs2.encode([0x61, 0x62, 0x63]); // 'abc'
punycode.ucs2.encode([0x1D306]); // '\uD834\uDF06'

punycode.version#

A string representing the current Punycode.js version number.

表示当前 Punycode.js 版本号的字符串。

Readline#

稳定度: 2 - 不稳定

To use this module, do require('readline'). Readline allows reading of a stream (such as process.stdin) on a line-by-line basis.

要使用此模块,需要 require('readline')。Readline 允许逐行读取一个流(例如 process.stdin)的内容。

Note that once you've invoked this module, your node program will not terminate until you've closed the interface. Here's how to allow your program to gracefully exit:

需要注意的是你一旦调用了这个模块,你的node程序将不会终止直到你关闭此接口。下面是如何让你的程序正常退出的方法:

var readline = require('readline');

var rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

rl.question("What do you think of node.js? ", function(answer) {
  // TODO: 将答案记录在数据库中
  console.log("Thank you for your valuable feedback:", answer);

  rl.close();
});

readline.createInterface(options)#

Creates a readline Interface instance. Accepts an "options" Object that takes the following values:

创建一个readline的接口实例. 接受一个Object类型参数,可传递以下几个值:

  • input - the readable stream to listen to (Required).

  • input - 要监听的可读流 (必需).

  • output - the writable stream to write readline data to (Required).

  • output - 要写入 readline 数据的可写流 (必需)。

  • completer - an optional function that is used for Tab autocompletion. See below for an example of using this.

  • completer - 用于 Tab 自动补全的可选函数。见下面使用的例子。

  • terminal - pass true if the input and output streams should be treated like a TTY, and have ANSI/VT100 escape codes written to it. Defaults to checking isTTY on the output stream upon instantiation.

  • terminal - 如果希望 inputoutput 流被当作 TTY 对待,并向其写入 ANSI/VT100 转义编码,则传入 true。默认在实例化时检查 output 流的 isTTY

The completer function is given the current line entered by the user, and is supposed to return an Array with 2 entries:

completer 函数会获得用户当前输入的行,并应当返回一个包含两个条目的数组:

  1. An Array with matching entries for the completion.

  1. 一个匹配补全的条目数组。

  2. The substring that was used for the matching.

  2. 用于匹配的子字符串。

Which ends up looking something like: [[substr1, substr2, ...], originalsubstring].

最终像这种形式: [[substr1, substr2, ...], originalsubstring].

Example:

示例:

function completer(line) {
  var completions = '.help .error .exit .quit .q'.split(' ')
  var hits = completions.filter(function(c) { return c.indexOf(line) == 0 })
  // show all completions if none found
  return [hits.length ? hits : completions, line]
}

Also completer can be run in async mode if it accepts two arguments:

completer 也可以运行在异步模式下,此时接受两个参数:

function completer(linePartial, callback) {
  callback(null, [['123'], linePartial]);
}

createInterface is commonly used with process.stdin and process.stdout in order to accept user input:

为了接受用户的输入,createInterface 通常跟 process.stdinprocess.stdout 一块使用:

var readline = require('readline');
var rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

Once you have a readline instance, you most commonly listen for the "line" event.

一旦你有一个 readline 实例,你通常会监听 "line" 事件。

If terminal is true for this instance then the output stream will get the best compatibility if it defines an output.columns property, and fires a "resize" event on the output if/when the columns ever change (process.stdout does this automatically when it is a TTY).

如果这个实例中terminaltrue,而且output流定义了一个output.columns属性,那么output流将获得最好的兼容性,并且,当columns变化时(当它是TTY时,process.stdout会自动这样做),会在output上触发一个 "resize"事件。

类: 接口#

The class that represents a readline interface with an input and output stream.

代表一个有输入输出流的 readline 接口的类。

rl.setPrompt(prompt)#

Sets the prompt, for example when you run node on the command line, you see > , which is node's prompt.

设置提示符,例如当你在命令行运行 node 时,你会看到 > ,这就是 node 的提示符。

rl.prompt([preserveCursor])#

Readies readline for input from the user, putting the current setPrompt options on a new line, giving the user a new spot to write. Set preserveCursor to true to prevent the cursor placement being reset to 0.

为用户输入准备好readline,将现有的setPrompt选项放到新的一行,让用户有一个新的地方开始输入。将preserveCursor设为true来防止光标位置被重新设定成0

This will also resume the input stream used with createInterface if it has been paused.

如果 createInterface 所使用的 input 流此前被暂停,该方法也会将其恢复。

rl.question(query, callback)#

Prepends the prompt with query and invokes callback with the user's response. Displays the query to the user, and then invokes callback with the user's response after it has been typed.

query 显示给用户作为提示,并在用户输入应答后,以用户的应答作为参数调用 callback

This will also resume the input stream used with createInterface if it has been paused.

如果 createInterface 所使用的 input 流此前被暂停,该方法也会将其恢复。

Example usage:

使用示例:

interface.question('What is your favorite food?', function(answer) {
  console.log('Oh, so your favorite food is ' + answer);
});

rl.pause()#

Pauses the readline input stream, allowing it to be resumed later if needed.

暂停 readline 的输入流 (input stream), 如果有需要稍后还可以恢复。

rl.resume()#

Resumes the readline input stream.

恢复 readline 的输入流 (input stream).

rl.close()#

Closes the Interface instance, relinquishing control on the input and output streams. The "close" event will also be emitted.

关闭接口实例 (Interface instance), 放弃控制输入输出流。"close" 事件会被触发。

rl.write(data, [key])#

Writes data to output stream. key is an object literal to represent a key sequence; available if the terminal is a TTY.

data 写入到 output 流。key 是一个代表键序列的对象;当终端是一个 TTY 时可用。

This will also resume the input stream if it has been paused.

如果 input 流此前被暂停,该方法也会将其恢复。

Example:

示例:

rl.write('Delete me!');
// 模仿 ctrl+u快捷键,删除之前所写行 
rl.write(null, {ctrl: true, name: 'u'});

Events#

Event: 'line'#

function (line) {}

Emitted whenever the input stream receives a \n, usually received when the user hits enter, or return. This is a good hook to listen for user input.

input 流接受了一个 \n 时触发,通常在用户敲击回车或者返回时接收。 这是一个监听用户输入的利器。

Example of listening for line:

监听 line 事件的示例:

rl.on('line', function (cmd) {
  console.log('You just typed: '+cmd);
});

事件: 'pause'#

function () {}

Emitted whenever the input stream is paused.

不论何时,只要输入流被暂停就会触发。

Also emitted whenever the input stream is not paused and receives the SIGCONT event. (See events SIGTSTP and SIGCONT)

而在输入流未被暂停,但收到 SIGCONT 信号时也会触发。 (详见 SIGTSTPSIGCONT 事件)

Example of listening for pause:

监听 pause 事件的示例:

rl.on('pause', function() {
  console.log('Readline 输入暂停.');
});

事件: 'resume'#

function () {}

Emitted whenever the input stream is resumed.

不论何时,只要输入流重新启用就会触发。

Example of listening for resume:

监听 resume 事件的示例:

rl.on('resume', function() {
  console.log('Readline 恢复.');
});

事件: 'close'#

function () {}

Emitted when close() is called.

close() 被调用时触发。

Also emitted when the input stream receives its "end" event. The Interface instance should be considered "finished" once this is emitted. For example, when the input stream receives ^D, respectively known as EOT.

input流接收到"结束"事件时也会被触发. 一旦触发,应当认为Interface实例 "结束" . 例如, 当input流接收到^D时, 分别被认为EOT.

This event is also called if there is no SIGINT event listener present when the input stream receives a ^C, respectively known as SIGINT.

input 流接收到一个 ^C 时,即使没有 SIGINT 监听器,也会触发这个事件,分别被称为 SIGINT

Event: 'SIGINT'#

function () {}

Emitted whenever the input stream receives a ^C, respectively known as SIGINT. If there is no SIGINT event listener present when the input stream receives a SIGINT, pause will be triggered.

只要 input 流接收到 ^C(即 SIGINT)就会被触发。当 input 流接收到 SIGINT 时,如果没有 SIGINT 事件监听器,则会触发 pause 事件。

Example of listening for SIGINT:

监听 SIGINT 信号的示例:

rl.on('SIGINT', function() {
  rl.question('Are you sure you want to exit?', function(answer) {
    if (answer.match(/^y(es)?$/i)) rl.pause();
  });
});

Event: 'SIGTSTP'#

function () {}

This does not work on Windows.

该功能在 Windows 操作系统上不可用。

Emitted whenever the input stream receives a ^Z, respectively known as SIGTSTP. If there is no SIGTSTP event listener present when the input stream receives a SIGTSTP, the program will be sent to the background.

只要 input 流接收到 ^Z(即 SIGTSTP)就会被触发。当 input 流接收到 SIGTSTP 时,如果没有 SIGTSTP 事件监听器,程序将被送往后台。

When the program is resumed with fg, the pause and SIGCONT events will be emitted. You can use either to resume the stream.

当程序通过 fg 恢复运行时,pauseSIGCONT 事件将会被触发。你可以使用两者中任一事件来恢复流。

The pause and SIGCONT events will not be triggered if the stream was paused before the program was sent to the background.

如果流在程序被送往后台之前就已被暂停,pauseSIGCONT 事件将不会被触发。

Example of listening for SIGTSTP:

监听 SIGTSTP 的示例:

rl.on('SIGTSTP', function() {
  // 这将重载 SIGTSTP并防止程序转到
  // 后台.
  console.log('Caught SIGTSTP.');
});

Event: 'SIGCONT'#

function () {}

This does not work on Windows.

该功能在 Windows 操作系统上不可用。

Emitted whenever the input stream is sent to the background with ^Z, respectively known as SIGTSTP, and then continued with fg(1). This event only emits if the stream was not paused before sending the program to the background.

每当 input 流通过 ^Z(即 SIGTSTP)被送往后台,随后又通过 fg(1) 恢复时触发。该事件只有在流被送往后台之前没有暂停时才会触发。

Example of listening for SIGCONT:

监听 SIGCONT 的示例:

rl.on('SIGCONT', function() {
  // `prompt` 将会自动恢复流
  rl.prompt();
});

示例: Tiny CLI#

Here's an example of how to use all these together to craft a tiny command line interface:

这里有一个使用所有方法精心设计的小命令行程序:

var readline = require('readline'),
    rl = readline.createInterface(process.stdin, process.stdout);

rl.setPrompt('OHAI> ');
rl.prompt();

rl.on('line', function(line) {
  switch(line.trim()) {
    case 'hello':
      console.log('world!');
      break;
    default:
      console.log('Say what? I might have heard `' + line.trim() + '`');
      break;
  }
  rl.prompt();
}).on('close', function() {
  console.log('Have a great day!');
  process.exit(0);
});

REPL#

稳定度: 3 - 稳定

A Read-Eval-Print-Loop (REPL) is available both as a standalone program and easily includable in other programs. The REPL provides a way to interactively run JavaScript and see the results. It can be used for debugging, testing, or just trying things out.

一个 Read-Eval-Print-Loop(REPL,读取-执行-输出循环)既可用于独立程序也可很容易地被集成到其它程序中。REPL 提供了一种交互地执行 JavaScript 并查看输出的方式。它可以被用作调试、测试或仅仅尝试某些东西。

By executing node without any arguments from the command-line you will be dropped into the REPL. It has simplistic emacs line-editing.

在命令行中不带任何参数执行 node 您便会进入 REPL。它提供了一个简单的 Emacs 行编辑。

mjr:~$ node
Type '.help' for options.
> a = [ 1, 2, 3];
[ 1, 2, 3 ]
> a.forEach(function (v) {
...   console.log(v);
...   });
1
2
3

For advanced line-editors, start node with the environmental variable NODE_NO_READLINE=1. This will start the main and debugger REPL in canonical terminal settings which will allow you to use with rlwrap.

若想使用高级的行编辑器,可设置环境变量 NODE_NO_READLINE=1 后启动 node。这会以规范终端设置 (canonical terminal settings) 启动主 REPL 和调试器 REPL,从而允许你配合 rlwrap 使用。

For example, you could add this to your bashrc file:

例如,您可以将下列代码加入到您的 bashrc 文件:

alias node="env NODE_NO_READLINE=1 rlwrap node"

repl.start(options)#

Returns and starts a REPLServer instance. Accepts an "options" Object that takes the following values:

启动并返回一个 REPLServer 实例。接受一个包含如下内容的 "options" 对象:

  • prompt - the prompt and stream for all I/O. Defaults to > .

  • prompt - 所有输入输出的提示符。默认是 > .

  • input - the readable stream to listen to. Defaults to process.stdin.

  • input - 监听的可读流。默认指向标准输入流 process.stdin

  • output - the writable stream to write readline data to. Defaults to process.stdout.

  • output - 用来输出数据的可写流。默认指向标准输出流 process.stdout

  • terminal - pass true if the stream should be treated like a TTY, and have ANSI/VT100 escape codes written to it. Defaults to checking isTTY on the output stream upon instantiation.

  • terminal - 如果 stream 应该被当作 TTY 对待,并写入 ANSI/VT100 转义编码,则传入 true。默认在实例化时检查 output 流的 isTTY

  • eval - function that will be used to eval each given line. Defaults to an async wrapper for eval(). See below for an example of a custom eval.

  • eval - 用来对每一行进行求值的函数。 默认为eval()的一个异步包装函数。下面给出一个自定义eval的例子。

  • useColors - a boolean which specifies whether or not the writer function should output colors. If a different writer function is set then this does nothing. Defaults to the repl's terminal value.

  • useColors - 一个布尔值,表明了writer函数是否会输出颜色。如果设定了一个不同的writer函数,那么这不会产生任何影响。默认为repl的terminal值。

  • useGlobal - if set to true, then the repl will use the global object, instead of running scripts in a separate context. Defaults to false.

  • useGlobal - 如果设定为true,那么repl就会使用global对象而不是在一个独立环境里运行脚本。默认为false

  • ignoreUndefined - if set to true, then the repl will not output the return value of command if it's undefined. Defaults to false.

  • ignoreUndefined - 如果设定为 true,当命令的返回值为 undefined 时,repl 将不会输出它。默认为 false

  • writer - the function to invoke for each command that gets evaluated which returns the formatting (including coloring) to display. Defaults to util.inspect.

  • writer - 每一个被求值的命令都会调用此函数,该函数返回用于显示的格式化结果(包括着色)。默认为 util.inspect

You can use your own eval function if it has following signature:

你可以使用你自己的 eval 函数,只要它具有如下签名:

function eval(cmd, context, filename, callback) {
  callback(null, result);
}

Multiple REPLs may be started against the same running instance of node. Each will share the same global object but will have unique I/O.

多个 REPL 可以针对同一个运行中的 node 实例启动。它们共享同一个 global 对象,但各自拥有独立的 I/O。

Here is an example that starts a REPL on stdin, a Unix socket, and a TCP socket:

以下是通过标准输入流(stdin)、Unix socket 以及 TCP socket 三种情况来启动 REPL 的例子:

var net = require('net'),
    repl = require('repl');

var connections = 0;

net.createServer(function (socket) {
  connections += 1;
  repl.start({
    prompt: "node via TCP socket> ",
    input: socket,
    output: socket
  }).on('exit', function() {
    socket.end();
  });
}).listen(5001);

Running this program from the command line will start a REPL on stdin. Other REPL clients may connect through the Unix socket or TCP socket. telnet is useful for connecting to TCP sockets, and socat can be used to connect to both Unix and TCP sockets.

从命令行运行该程序,将会从标准输入流启动 REPL 模式。 其他的 REPL 客户端也可以通过 Unix socket 或者 TCP socket 连接。 telnet 常用于连接 TCP sockets,而 socat 则可以同时用来连接 Unix 和 TCP sockets。

By starting a REPL from a Unix socket-based server instead of stdin, you can connect to a long-running node process without restarting it.

通过从一个基于 Unix socket 的服务器而不是 stdin 启动 REPL,你可以连接到一个长期运行的 node 进程而无需重启它。

For an example of running a "full-featured" (terminal) REPL over a net.Server and net.Socket instance, see: https://gist.github.com/2209310

一个在net.Servernet.Socket实例上运行的"全功能"(terminal)REPL的例子可以查看这里: https://gist.github.com/2209310

For an example of running a REPL instance over curl(1), see: https://gist.github.com/2053342

一个在curl(1)上运行的REPL实例的例子可以查看这里: https://gist.github.com/2053342

事件: 'exit'#

function () {}

Emitted when the user exits the REPL in any of the defined ways. Namely, typing .exit at the repl, pressing Ctrl+C twice to signal SIGINT, or pressing Ctrl+D to signal "end" on the input stream.

当用户以任意预定义的方式退出 REPL 时触发。例如在 repl 里输入 .exit、按两次 Ctrl+C 发送 SIGINT 信号,或者按 Ctrl+Dinput 流上发送 "end"。

Example of listening for exit:

监听 exit 事件的例子:

r.on('exit', function () {
  console.log('从 REPL 得到 "exit" 事件!');
  process.exit();
});

事件: 'reset'#

function (context) {}

Emitted when the REPL's context is reset. This happens when you type .clear. If you start the repl with { useGlobal: true } then this event will never be emitted.

当 REPL 的上下文被重置时触发。当你输入 .clear 命令时就会发生这种情况。如果你以 { useGlobal: true } 启动 repl,那么这个事件永远不会被触发。

Example of listening for reset:

监听reset的例子:

// 当一个新的上下文被创建时,扩充这个上下文。
r.on('reset', function (context) {
  console.log('repl有一个新的上下文');
  someExtension.extend(context);
});

REPL 特性#

Inside the REPL, Control+D will exit. Multi-line expressions can be input. Tab completion is supported for both global and local variables.

在 REPL 里,按 Control+D 会退出。可以输入多行表达式。全局变量和本地变量均支持 Tab 自动补全。

The special variable _ (underscore) contains the result of the last expression.

特殊变量 _ (下划线)储存了上一个表达式的结果。

> [ "a", "b", "c" ]
[ 'a', 'b', 'c' ]
> _.length
3
> _ += 1
4

The REPL provides access to any variables in the global scope. You can expose a variable to the REPL explicitly by assigning it to the context object associated with each REPLServer. For example:

REPL提供了访问global域里所有变量的权限。通过将一个变量赋值给与每一个REPLServer关联的context对象,你可以显式地将一个变量暴露给REPL。例如:

repl.start("> ").context.m = msg;

Things in the context object appear as local within the REPL:

context对象里的东西,会在REPL以本地变量的形式出现。

mjr:~$ node repl_test.js
> m
'message'

There are a few special REPL commands:

有几个特殊的REPL命令:

  • .break - While inputting a multi-line expression, sometimes you get lost or just don't care about completing it. .break will start over.
  • .clear - Resets the context object to an empty object and clears any multi-line expression.
  • .exit - Close the I/O stream, which will cause the REPL to exit.
  • .help - Show this list of special commands.
  • .save - Save the current REPL session to a file

    .save ./file/to/save.js

  • .load - Load a file into the current REPL session.

    .load ./file/to/load.js

  • .break - 当你输入一个多行表达式时,有时你会突然走神,或者不想完成它了。.break 让你可以从头再来。

  • .clear - 重置context对象为一个空对象,并且清除所有的多行表达式。
  • .exit - 关闭I/O流,使得REPL退出。
  • .help - 显示这个特殊命令的列表。
  • .save - 将当前的REPL会话保存到一个文件

    .save ./file/to/save.js

  • .load - 将一个文件装载到当前的REPL会话。

    .load ./file/to/load.js

The following key combinations in the REPL have these special effects:

下面的组合键在REPL中有以下效果:

  • <ctrl>C - Similar to the .break keyword. Terminates the current command. Press twice on a blank line to forcibly exit.
  • <ctrl>D - Similar to the .exit keyword.

  • <ctrl>C - 与.break关键字类似。终止正在执行的命令。在一个空行连按两次会强制退出。

  • <ctrl>D - 与.exit关键字类似。

执行 JavaScript#

稳定度: 3 - 稳定

You can access this module with:

你可以这样引入此模块:

var vm = require('vm');

JavaScript code can be compiled and run immediately or compiled, saved, and run later.

JavaScript 代码可以被编译并立即执行,也可以在编译后保存,留到稍后执行。

vm.runInThisContext(code, [options])#

vm.runInThisContext() compiles code, runs it and returns the result. Running code does not have access to local scope, but does have access to the current global object.

vm.runInThisContext()code 进行编译、运行并返回结果。 被运行的代码没有对本地作用域 (local scope) 的访问权限,但是可以访问当前的 global 对象。

Example of using vm.runInThisContext and eval to run the same code:

使用 vm.runInThisContexteval 分别执行相同的代码:

localVar = 'initial value';

var vmResult = vm.runInThisContext('localVar = "vm";');
console.log('vmResult: ' + vmResult + ', localVar: ' + localVar);

var evalResult = eval('localVar = "eval";');
console.log('evalResult: ' + evalResult + ', localVar: ' + localVar);

// vmResult: 'vm', localVar: 'initial value'
// evalResult: 'eval', localVar: 'eval'

vm.runInThisContext does not have access to the local scope, so localVar is unchanged. eval does have access to the local scope, so localVar is changed.

vm.runInThisContext 无法访问本地作用域,因此 localVar 没有被改变。 eval 可以访问本地作用域,因此 localVar 被改变。

In this way vm.runInThisContext is much like an indirect eval call, e.g. (0,eval)('code'). However, it also has the following additional options:

这种情况下 vm.runInThisContext 可以看作一种 间接的 eval 调用, 如 (0,eval)('code')。但是 vm.runInThisContext 也提供下面几个额外的参数:

  • filename: allows you to control the filename that shows up in any stack traces produced.
  • displayErrors: whether or not to print any errors to stderr, with the line of code that caused them highlighted, before throwing an exception. Will capture both syntax errors from compiling code and runtime errors thrown by executing the compiled code. Defaults to true.
  • timeout: a number of milliseconds to execute code before terminating execution. If execution is terminated, an Error will be thrown.

  • filename: 允许您控制显示在栈追踪 (stack trace) 中的文件名。

  • displayErrors: 是否在抛出异常前输出带高亮错误代码行的错误信息到 stderr。 将会捕捉所有在编译 code 的过程中产生的语法错误以及执行过程中产生的运行时错误。 默认为 true
  • timeout: 以毫秒为单位规定 code 允许执行的时间。在执行过程中被终止时会有 Error 抛出。

vm.createContext([sandbox])#

If given a sandbox object, will "contextify" that sandbox so that it can be used in calls to vm.runInContext or script.runInContext. Inside scripts run as such, sandbox will be the global object, retaining all its existing properties but also having the built-in objects and functions any standard global object has. Outside of scripts run by the vm module, sandbox will be unchanged.

如提供 sandbox 对象则将沙箱 (sandbox) 对象 “上下文化 (contextify)” 供 vm.runInContext 或者 script.runInContext 使用。 以此方式运行的脚本将以 sandbox 作为全局对象,该对象将在保留其所有的属性的基础上拥有标准 全局对象 所拥有的内置对象和函数。 在由 vm 模块运行的脚本之外的地方 sandbox 将不会被改变。

If not given a sandbox object, returns a new, empty contextified sandbox object you can use.

如果没有提供沙箱对象,则返回一个新建的、没有任何对象被上下文化的可用沙箱。

This function is useful for creating a sandbox that can be used to run multiple scripts, e.g. if you were emulating a web browser it could be used to create a single sandbox representing a window's global object, then run all <script> tags together inside that sandbox.

此函数可用于创建可执行多个脚本的沙箱, 比如,在模拟浏览器的时候可以使用该函数创建一个用于表示 window 全局对象的沙箱, 并将所有 <script> 标签放入沙箱执行。

vm.isContext(sandbox)#

Returns whether or not a sandbox object has been contextified by calling vm.createContext on it.

返回沙箱对象是否已经通过 vm.createContext 上下文化 (contextified)

vm.runInContext(code, contextifiedSandbox, [options])#

vm.runInContext compiles code, then runs it in contextifiedSandbox and returns the result. Running code does not have access to local scope. The contextifiedSandbox object must have been previously contextified via vm.createContext; it will be used as the global object for code.

vm.runInContext 编译 code 放入 contextifiedSandbox 执行并返回执行结果。 被执行的代码对 本地作用域 (local scope) 没有访问权。 contextifiedSandbox 必须在使用前通过 vm.createContext 上下文化,用作 code 的全局对象。

vm.runInContext takes the same options as vm.runInThisContext.

vm.runInContext 使用与 vm.runInThisContext 相同的 选项 (options)

Example: compile and execute different scripts in a single existing context.

示例:在同一个上下文中编译并执行不同的脚本

var util = require('util');
var vm = require('vm');

var sandbox = { globalVar: 1 };
vm.createContext(sandbox);

for (var i = 0; i < 10; ++i) {
  vm.runInContext('globalVar *= 2;', sandbox);
}
console.log(util.inspect(sandbox));

// { globalVar: 1024 }

Note that running untrusted code is a tricky business requiring great care. vm.runInContext is quite useful, but safely running untrusted code requires a separate process.

执行不可信代码 (untrusted code) 是一件充满技巧而且需要非常小心的工作。 vm.runInContext 十分好用,但是安全地运行不可信代码还需要将这些代码放入单独的进程里面执行。

vm.runInNewContext(code, [sandbox], [options])#

vm.runInNewContext compiles code, contextifies sandbox if passed or creates a new contextified sandbox if it's omitted, and then runs the code with the sandbox as the global object and returns the result.

vm.runInNewContext 首先编译 code,若提供 sandbox 则将 sandbox 上下文化,若未提供则创建一个新的沙箱并上下文化, 然后将代码放入沙箱作为全局对象的上下文内执行并返回结果。

vm.runInNewContext takes the same options as vm.runInThisContext.

vm.runInNewContext 使用与 vm.runInThisContext 相同的 选项 (options)

Example: compile and execute code that increments a global variable and sets a new one. These globals are contained in the sandbox.

示例: 编译并执行一段“自增一个全局变量然后创建一个全局变量”的代码。这些被操作的全局变量会被保存在沙箱中。

var util = require('util');
var vm = require('vm');

var sandbox = {
  animal: 'cat',
  count: 2
};

vm.runInNewContext('count += 1; name = "kitty"', sandbox);
console.log(util.inspect(sandbox));

// { animal: 'cat', count: 3, name: 'kitty' }

Note that running untrusted code is a tricky business requiring great care. vm.runInNewContext is quite useful, but safely running untrusted code requires a separate process.

执行不可信代码 (untrusted code) 是一件充满技巧而且需要非常小心的工作。 vm.runInNewContext 十分好用,但是安全地运行不可信代码还需要将这些代码放入单独的进程里面执行。

类: Script#

A class for holding precompiled scripts, and running them in specific sandboxes.

用于存放预编译脚本的类,可将预编译代码放入沙箱执行。

new vm.Script(code, options)#

Creating a new Script compiles code but does not run it. Instead, the created vm.Script object represents this compiled code. This script can be run later many times using methods below. The returned script is not bound to any global object. It is bound before each run, just for that run.

创建一个新的 Script 会编译 code 但不执行它。被创建的 vm.Script 对象表示这份已编译的代码。这份脚本可以在稍后使用下面的方法多次执行。返回的脚本未绑定任何全局对象,它仅在每次执行时被绑定,执行结束后即释放绑定。

The options when creating a script are:

创建脚本的选项 (option) 有:

  • filename: allows you to control the filename that shows up in any stack traces produced from this script.
  • displayErrors: whether or not to print any errors to stderr, with the line of code that caused them highlighted, before throwing an exception. Applies only to syntax errors compiling the code; errors while running the code are controlled by the options to the script's methods.

  • filename: 允许您控制显示在栈追踪 (stack trace) 中的文件名。

  • displayErrors: 是否在抛出异常前输出带高亮错误代码行的错误信息到 stderr。 仅捕捉所有在编译过程中产生的语法错误(运行时错误由运行脚本选项控制)。

script.runInThisContext([options])#

Similar to vm.runInThisContext but a method of a precompiled Script object. script.runInThisContext runs script's compiled code and returns the result. Running code does not have access to local scope, but does have access to the current global object.

类似 vm.runInThisContext 只是作为预编译的 Script 对象方法。 script.runInThisContext 执行被编译的 script 并返回结果。 被运行的代码没有对本地作用域 (local scope) 的访问权限,但是可以访问当前的 global 对象。

Example of using script.runInThisContext to compile code once and run it multiple times:

示例: 使用 script.runInThisContext 编译代码并多次执行:

var vm = require('vm');

globalVar = 0;

var script = new vm.Script('globalVar += 1', { filename: 'myfile.vm' });

for (var i = 0; i < 1000; ++i) {
  script.runInThisContext();
}

console.log(globalVar);

// 1000

The options for running a script are:

运行脚本的选项 (option) 有:

  • displayErrors: whether or not to print any runtime errors to stderr, with the line of code that caused them highlighted, before throwing an exception. Applies only to runtime errors executing the code; it is impossible to create a Script instance with syntax errors, as the constructor will throw.
  • timeout: a number of milliseconds to execute the script before terminating execution. If execution is terminated, an Error will be thrown.

  • displayErrors: 是否在抛出异常前输出带高亮错误代码行的错误信息到 stderr。 仅捕捉所有执行过程中产生的运行时错误(语法错误会在 Script 示例创建时就发生,因此不可能创建出带语法错误的 Script 对象)。

  • timeout: 以毫秒为单位规定 code 允许执行的时间。在执行过程中被终止时会有 Error 抛出。

script.runInContext(contextifiedSandbox, [options])#

Similar to vm.runInContext but a method of a precompiled Script object. script.runInContext runs script's compiled code in contextifiedSandbox and returns the result. Running code does not have access to local scope.

类似 vm.runInContext 只是作为预编译的 Script 对象方法。 script.runInContextcontextifiedSandbox 中执行 script 编译出的代码,并返回结果。 被运行的代码没有对本地作用域 (local scope) 的访问权限。

script.runInContext takes the same options as script.runInThisContext.

script.runInContext 使用与 script.runInThisContext 相同的 选项 (option)。

Example: compile code that increments a global variable and sets one, then execute the code multiple times. These globals are contained in the sandbox.

示例: 编译一段“自增一个全局变量然后创建一个全局变量”的代码,然后多次执行此代码, 被操作的全局变量会被保存在沙箱中。

var util = require('util');
var vm = require('vm');

var sandbox = {
  animal: 'cat',
  count: 2
};

var context = vm.createContext(sandbox);
var script = new vm.Script('count += 1; name = "kitty"');

for (var i = 0; i < 10; ++i) {
  script.runInContext(context);
}

console.log(util.inspect(sandbox));

// { animal: 'cat', count: 12, name: 'kitty' }

Note that running untrusted code is a tricky business requiring great care. script.runInContext is quite useful, but safely running untrusted code requires a separate process.

执行不可信代码 (untrusted code) 是一件充满技巧而且需要非常小心的工作。 script.runInContext 十分好用,但是安全地运行不可信代码还需要将这些代码放入单独的进程里面执行。

script.runInNewContext([sandbox], [options])#

Similar to vm.runInNewContext but a method of a precompiled Script object. script.runInNewContext contextifies sandbox if passed or creates a new contextified sandbox if it's omitted, and then runs script's compiled code with the sandbox as the global object and returns the result. Running code does not have access to local scope.

类似 vm.runInNewContext 但是作为预编译的 Script 对象方法。 若提供 sandboxscript.runInNewContextsandbox 上下文化,若未提供,则创建一个新的上下文化的沙箱, 然后将代码放入沙箱作为全局对象的上下文内执行并返回结果。

script.runInNewContext takes the same options as script.runInThisContext.

script.runInNewContext 使用与 script.runInThisContext 相同的 选项 (option)。

Example: compile code that sets a global variable, then execute the code multiple times in different contexts. These globals are set on and contained in the sandboxes.

示例: 编译一段“写入一个全局变量”的代码,然后将代码放入不同的上下文 (context) 执行,这些被操作的全局变量会被保存在沙箱中。

var util = require('util');
var vm = require('vm');

var sandboxes = [{}, {}, {}];

var script = new vm.Script('globalVar = "set"');

sandboxes.forEach(function (sandbox) {
  script.runInNewContext(sandbox);
});

console.log(util.inspect(sandboxes));

// [{ globalVar: 'set' }, { globalVar: 'set' }, { globalVar: 'set' }]

Note that running untrusted code is a tricky business requiring great care. script.runInNewContext is quite useful, but safely running untrusted code requires a separate process.

执行不可信代码 (untrusted code) 是一件充满技巧而且需要非常小心的工作。 script.runInNewContext 十分好用,但是安全地运行不可信代码还需要将这些代码放入单独的进程里面执行。

子进程#

稳定度: 3 - 稳定

Node provides a tri-directional popen(3) facility through the child_process module.

Node 通过 child_process 模块提供了类似 popen(3) 的处理三向数据流(stdin/stdout/stderr)的功能。

It is possible to stream data through a child's stdin, stdout, and stderr in a fully non-blocking way. (Note that some programs use line-buffered I/O internally. That doesn't affect node.js but it means data you send to the child process is not immediately consumed.)

它能够以完全非阻塞的方式通过子进程的 stdinstdoutstderr 流式传递数据。(请注意,某些程序在内部使用行缓冲 I/O。这不会影响 node.js,但意味着您发送给子进程的数据不会被立即消费。)

To create a child process use require('child_process').spawn() or require('child_process').fork(). The semantics of each are slightly different, and explained below.

使用 require('child_process').spawn()require('child_process').fork() 创建子进程。这两种方法的语义有些区别,下文将会解释。

类: ChildProcess#

ChildProcess is an EventEmitter.

ChildProcess 是一个 EventEmitter

Child processes always have three streams associated with them. child.stdin, child.stdout, and child.stderr. These may be shared with the stdio streams of the parent process, or they may be separate stream objects which can be piped to and from.

子进程有三个与之关联的流:child.stdinchild.stdoutchild.stderr。它们可以共享父进程的 stdio 流,也可以作为独立的被导流的流对象。

The ChildProcess class is not intended to be used directly. Use the spawn() or fork() methods to create a Child Process instance.

ChildProcess 类不能直接被使用, 使用 spawn() 或者 fork() 方法创建一个 Child Process 实例。

事件: 'error'#

  • err Error Object the error.

  • err Error Object 错误。

Emitted when:

发生于:

  1. The process could not be spawned, or
  2. The process could not be killed, or
  3. Sending a message to the child process failed for whatever reason.

  1. 进程无法被创建,或者
  2. 进程无法被终止,或者
  3. 因任何原因向子进程发送消息失败。

See also ChildProcess#kill() and ChildProcess#send().

参阅 ChildProcess#kill()ChildProcess#send()

事件: 'exit'#

  • code Number the exit code, if it exited normally.
  • signal String the signal passed to kill the child process, if it was killed by the parent.

  • code Number 假如进程正常退出,则为它的退出代码。

  • signal String 假如是被父进程终止,则为所传入的终止子进程的信号。

This event is emitted after the child process ends. If the process terminated normally, code is the final exit code of the process, otherwise null. If the process terminated due to receipt of a signal, signal is the string name of the signal, otherwise null.

这个事件在子进程结束后触发。如果进程正常终止,code 为进程最终的退出码,否则为 null。如果进程因接收到信号而终止,signal 为该信号的字符串名称,否则为 null

Note that the child process stdio streams might still be open.

注意子进程的 stdio 流可能仍为开启状态。

See waitpid(2).

参阅waitpid(2).

事件: 'close'#

  • code Number the exit code, if it exited normally.
  • signal String the signal passed to kill the child process, if it was killed by the parent.

  • code Number 假如进程正常退出,则为它的退出代码。

  • signal String 假如是被父进程终止,则为所传入的终止子进程的信号。

This event is emitted when the stdio streams of a child process have all terminated. This is distinct from 'exit', since multiple processes might share the same stdio streams.

这个事件在子进程的所有 stdio 流都终止时触发。它与 'exit' 事件不同,因为多个进程可能共享同一组 stdio 流。

事件: 'disconnect'#

This event is emitted after using the .disconnect() method in the parent or in the child. After disconnecting it is no longer possible to send messages. An alternative way to check if you can send messages is to see if the child.connected property is true.

在父进程或子进程中调用 .disconnect() 方法后会触发此事件。断开连接后将无法再互相发送消息。另一种检查能否发送消息的方式,是查看 child.connected 属性是否为 true。

事件: 'message'#

  • message Object a parsed JSON object or primitive value
  • sendHandle Handle object a Socket or Server object

  • message Object 一个已解析的JSON对象或者原始类型值

  • sendHandle Handle object 一个socket 或者 server对象

Messages send by .send(message, [sendHandle]) are obtained using the message event.

通过 .send(message, [sendHandle]) 发送的消息可以通过监听 'message' 事件获取。

child.stdin#

  • Stream object

  • Stream object

A Writable Stream that represents the child process's stdin. Closing this stream via end() often causes the child process to terminate.

一个代表子进程 stdin 的可写流(Writable Stream)。通过 end() 关闭该流通常会导致子进程终止。

If the child stdio streams are shared with the parent, then this will not be set.

如果子进程的 stdio 流与父进程共享,则该属性不会被设置。

child.stdout#

  • Stream object

  • Stream object

A Readable Stream that represents the child process's stdout.

一个代表子进程 stdout 的可读流(Readable Stream)。

If the child stdio streams are shared with the parent, then this will not be set.

如果子进程的 stdio 流与父进程共享,则该属性不会被设置。

child.stderr#

  • Stream object

  • Stream object

A Readable Stream that represents the child process's stderr.

一个代表子进程 stderr 的可读流(Readable Stream)。

If the child stdio streams are shared with the parent, then this will not be set.

如果子进程的 stdio 流与父进程共享,则该属性不会被设置。

child.pid#

  • Integer

  • Integer

The PID of the child process.

子进程的 PID。

Example:

示例:

var spawn = require('child_process').spawn,
    grep  = spawn('grep', ['ssh']);

console.log('Spawned child pid: ' + grep.pid);
grep.stdin.end();

child.kill([signal])#

  • signal String

  • signal String

Send a signal to the child process. If no argument is given, the process will be sent 'SIGTERM'. See signal(7) for a list of available signals.

向子进程发送一个信号。如果没有给出参数,进程将收到 'SIGTERM'。参阅 signal(7) 查看可用信号列表。

var spawn = require('child_process').spawn,
    grep  = spawn('grep', ['ssh']);

// send SIGHUP to process
grep.kill('SIGHUP');

May emit an 'error' event when the signal cannot be delivered. Sending a signal to a child process that has already exited is not an error but may have unforeseen consequences: if the PID (the process ID) has been reassigned to another process, the signal will be delivered to that process instead. What happens next is anyone's guess.

当信号无法送达时可能触发 'error' 事件。向一个已经退出的子进程发送信号并不是错误,但可能产生无法预见的后果:如果该 PID(进程 ID)已被重新分配给其他进程,信号将被送达到那个进程,接下来会发生什么就无人能够预料了。

Note that while the function is called kill, the signal delivered to the child process may not actually kill it. kill really just sends a signal to a process.

注意,虽然这个函数名为 kill,但送达子进程的信号并不一定会真的终止它。kill 实际上只是向进程发送一个信号而已。

See kill(2)

参阅 kill(2)。

child.send(message, [sendHandle])#

  • message Object
  • sendHandle Handle object

  • message Object

  • sendHandle Handle object

When using child_process.fork() you can write to the child using child.send(message, [sendHandle]) and messages are received by a 'message' event on the child.

当使用 child_process.fork() 时,你可以使用 child.send(message, [sendHandle]) 向子进程写数据,数据将通过子进程上的 'message' 事件接收。

For example:

例如:

var cp = require('child_process');

var n = cp.fork(__dirname + '/sub.js');

n.on('message', function(m) {
  console.log('PARENT got message:', m);
});

n.send({ hello: 'world' });

And then the child script, 'sub.js' might look like this:

然后是子进程脚本的代码, 'sub.js' 代码如下:

process.on('message', function(m) {
  console.log('CHILD got message:', m);
});

process.send({ foo: 'bar' });

In the child the process object will have a send() method, and process will emit objects each time it receives a message on its channel.

在子进程中,process 对象拥有 send() 方法,并且每当它的信道上接收到消息时,process 都会触发事件,消息以对象形式传入。

There is a special case when sending a {cmd: 'NODE_foo'} message. All messages containing a NODE_ prefix in its cmd property will not be emitted in the message event, since they are internal messages used by node core. Messages containing the prefix are emitted in the internalMessage event, you should by all means avoid using this feature, it is subject to change without notice.

不过发送 {cmd: 'NODE_foo'} 消息是一个特殊情况。所有 cmd 属性包含 NODE_ 前缀的消息都不会触发 'message' 事件,因为它们是 node 核心使用的内部消息。这类消息会触发 internalMessage 事件。你应当尽量避免使用这个特性,它可能在不另行通知的情况下发生改变。

The sendHandle option to child.send() is for sending a TCP server or socket object to another process. The child will receive the object as its second argument to the message event.

child.send()sendHandle 选项是用来发送一个TCP服务或者socket对象到另一个线程的,子进程将会接收这个参数作为‘message’事件的第二个参数。

Emits an 'error' event if the message cannot be sent, for example because the child process has already exited.

如果消息无法被发送,将触发一个 'error' 事件,例如因为子进程已经退出。

例子: 发送一个server对象#

Here is an example of sending a server:

这里是一个发送一个server对象的例子:

var child = require('child_process').fork(__dirname + '/child.js');

// 打开 server 对象并发送句柄
var server = require('net').createServer();
server.on('connection', function (socket) {
  socket.end('handled by parent');
});
server.listen(1337, function() {
  child.send('server', server);
});

And the child would then receive the server object as:

同时子进程将会以如下方式接收到这个server对象:

process.on('message', function(m, server) {
  if (m === 'server') {
    server.on('connection', function (socket) {
      socket.end('handled by child');
    });
  }
});

Note that the server is now shared between the parent and child, this means that some connections will be handled by the parent and some by the child.

注意,server 对象现在由父进程和子进程共享,这意味着某些连接会被父进程处理,某些则被子进程处理。

For dgram servers the workflow is exactly the same. Here you listen on a message event instead of connection and use server.bind instead of server.listen. (Currently only supported on UNIX platforms.)

对于 dgram 服务器,工作流程完全一样:监听 message 事件而不是 connection 事件,使用 server.bind 而不是 server.listen(目前仅在 UNIX 平台支持)。

示例: 发送socket对象#

Here is an example of sending a socket. It will spawn two children and handle connections with the remote address 74.125.127.100 as VIP by sending the socket to a "special" child process. Other sockets will go to a "normal" process.

这是一个发送 socket 的例子。它会创建两个子进程,并将来自远程地址 74.125.127.100 的连接视为 VIP,把对应的 socket 发送给一个"特殊"的子进程处理;其他 socket 则交给"普通"的子进程。

var normal = require('child_process').fork(__dirname + '/child.js', ['normal']);
var special = require('child_process').fork(__dirname + '/child.js', ['special']);

// 打开 server 并向子进程发送 socket
var server = require('net').createServer();
server.on('connection', function (socket) {

  // if this is a VIP
  if (socket.remoteAddress === '74.125.127.100') {
    special.send('socket', socket);
    return;
  }
  // just the usual dudes
  normal.send('socket', socket);
});
server.listen(1337);

The child.js could look like this:

child.js 文件代码如下:

process.on('message', function(m, socket) {
  if (m === 'socket') {
    socket.end('You were handled as a ' + process.argv[2] + ' person');
  }
});

Note that once a single socket has been sent to a child the parent can no longer keep track of when the socket is destroyed. To indicate this condition the .connections property becomes null. It is also recommended not to use .maxConnections in this condition.

注意,一旦一个 socket 被发送给了子进程,当这个 socket 被销毁时,父进程将不再对其保持跟踪。为表明这一状况,.connections 属性会变为 null。在这种情况下也不建议使用 .maxConnections。

child.disconnect()#

To close the IPC connection between parent and child use the child.disconnect() method. This allows the child to exit gracefully since there is no IPC channel keeping it alive. When calling this method the disconnect event will be emitted in both parent and child, and the connected flag will be set to false. Please note that you can also call process.disconnect() in the child process.

使用 child.disconnect() 方法可以关闭父进程与子进程之间的 IPC 连接。它让子进程得以优雅地退出,因为不再有保持其存活的 IPC 信道。调用该方法时,'disconnect' 事件会同时在父进程和子进程中触发,connected 标志会被设置为 false。请注意,你也可以在子进程中调用 process.disconnect()。

child_process.spawn(command, [args], [options])#

  • command String The command to run
  • args Array List of string arguments
  • options Object
    • cwd String Current working directory of the child process
    • stdio Array|String Child's stdio configuration. (See below)
    • customFds Array Deprecated File descriptors for the child to use for stdio. (See below)
    • env Object Environment key-value pairs
    • detached Boolean The child will be a process group leader. (See below)
    • uid Number Sets the user identity of the process. (See setuid(2).)
    • gid Number Sets the group identity of the process. (See setgid(2).)
  • return: ChildProcess object
  • command {String}要运行的命令
  • args {Array} 字符串参数列表
  • options {Object}
    • cwd {String} 子进程的当前的工作目录
    • stdio {Array|String} 子进程 stdio 配置. (参阅下文)
    • customFds {Array} Deprecated 作为子进程 stdio 使用的 文件标示符. (参阅下文)
    • env {Object} 环境变量的键值对
    • detached {Boolean} 子进程将会变成一个进程组的领导者. (参阅下文)
    • uid {Number} 设置进程的用户标识。(参阅 setuid(2)。)
    • gid {Number} 设置进程的组标识。(参阅 setgid(2)。)
  • 返回: {ChildProcess object}

Launches a new process with the given command, with command line arguments in args. If omitted, args defaults to an empty Array.

以给定的命令启动一个新进程,args 为命令行参数数组。如果省略,args 默认为一个空数组。

The third argument is used to specify additional options, which defaults to:

第三个参数被用来指定额外的设置,默认是:

{ cwd: undefined,
  env: process.env
}

cwd allows you to specify the working directory from which the process is spawned. Use env to specify environment variables that will be visible to the new process.

cwd 允许你指定新进程的工作目录。使用 env 指定对新进程可见的环境变量。

Example of running ls -lh /usr, capturing stdout, stderr, and the exit code:

一个运行 ls -lh /usr 的例子,捕获 stdout、stderr 以及退出代码:

var spawn = require('child_process').spawn,
    ls    = spawn('ls', ['-lh', '/usr']);

ls.on('close', function (code) {
  console.log('child process exited with code ' + code);
});

Example: A very elaborate way to run 'ps ax | grep ssh'

例子: 一个非常精巧的方法执行 'ps ax | grep ssh'

var spawn = require('child_process').spawn,
    grep  = spawn('grep', ['ssh']);

grep.on('close', function (code) {
  if (code !== 0) {
    console.log('grep process exited with code ' + code);
  }
});

Example of checking for failed exec:

检查执行错误的例子:

var spawn = require('child_process').spawn,
    child = spawn('bad_command');

child.stderr.setEncoding('utf8');
child.stderr.on('data', function (data) {
  if (/^execvp\(\)/.test(data)) {
    console.log('Failed to start child process.');
  }
});

Note that if spawn receives an empty options object, it will result in spawning the process with an empty environment rather than using process.env. This due to backwards compatibility issues with a deprecated API.

注意,如果 spawn 接收到一个空的 options 对象,它启动的进程将使用空的环境变量,而不是 process.env。这是出于与一个已废弃 API 向后兼容的考虑。

The 'stdio' option to child_process.spawn() is an array where each index corresponds to a fd in the child. The value is one of the following:

child_process.spawn() 中的 stdio 选项是一个数组,每个索引对应子进程中的一个文件标识符。可以是下列值之一:

  1. 'pipe' - Create a pipe between the child process and the parent process. The parent end of the pipe is exposed to the parent as a property on the child_process object as ChildProcess.stdio[fd]. Pipes created for fds 0 - 2 are also available as ChildProcess.stdin, ChildProcess.stdout and ChildProcess.stderr, respectively.
  2. 'ipc' - Create an IPC channel for passing messages/file descriptors between parent and child. A ChildProcess may have at most one IPC stdio file descriptor. Setting this option enables the ChildProcess.send() method. If the child writes JSON messages to this file descriptor, then this will trigger ChildProcess.on('message'). If the child is a Node.js program, then the presence of an IPC channel will enable process.send() and process.on('message').
  3. 'ignore' - Do not set this file descriptor in the child. Note that Node will always open fd 0 - 2 for the processes it spawns. When any of these is ignored node will open /dev/null and attach it to the child's fd.
  4. Stream object - Share a readable or writable stream that refers to a tty, file, socket, or a pipe with the child process. The stream's underlying file descriptor is duplicated in the child process to the fd that corresponds to the index in the stdio array.
  5. Positive integer - The integer value is interpreted as a file descriptor that is is currently open in the parent process. It is shared with the child process, similar to how Stream objects can be shared.
  6. null, undefined - Use default value. For stdio fds 0, 1 and 2 (in other words, stdin, stdout, and stderr) a pipe is created. For fd 3 and up, the default is 'ignore'.
  1. 'pipe' -在子进程与父进程之间创建一个管道,管道的父进程端以 child_process 的属性的形式暴露给父进程,如 ChildProcess.stdio[fd]。 为 文件标识(fds) 0 - 2 建立的管道也可以通过 ChildProcess.stdin,ChildProcess.stdout 及 ChildProcess.stderr 分别访问。

  2. 'ipc' - 创建一个IPC通道以在父进程与子进程之间传递 消息/文件标识符。一个子进程只能有最多一个 IPC stdio 文件标识。 设置该选项激活 ChildProcess.send() 方法。如果子进程向此文件标识符写JSON消息,则会触发 ChildProcess.on("message")。 如果子进程是一个nodejs程序,那么IPC通道的存在会激活process.send()和process.on('message')

  3. 'ignore' - 不在子进程中设置该文件标识。注意,Node 总是会为其spawn的进程打开 文件标识(fd) 0 - 2。 当其中任意一项被 ignored,node 会打开 /dev/null 并将其附给子进程的文件标识(fd)。

  4. Stream 对象 - 与子进程共享一个与tty,文件,socket,或者管道(pipe)相关的可读或可写流。 该流底层(underlying)的文件标识在子进程中被复制给stdio数组索引对应的文件标识(fd)

  5. 正数 - 该整形值被解释为父进程中打开的文件标识符。他与子进程共享,和Stream被共享的方式相似。

  6. null, undefined - 使用默认值。 对于stdio fds 0,1,2(或者说stdin,stdout和stderr),pipe管道被建立。对于fd 3及往后,默认为ignore

As a shorthand, the stdio argument may also be one of the following strings, rather than an array:

作为快捷方式,stdio 参数除了数组也可以是下列字符串之一:

  • ignore - ['ignore', 'ignore', 'ignore']
  • pipe - ['pipe', 'pipe', 'pipe']
  • inherit - [process.stdin, process.stdout, process.stderr] or [0,1,2]

  • ignore - ['ignore', 'ignore', 'ignore']

  • pipe - ['pipe', 'pipe', 'pipe']
  • inherit - [process.stdin, process.stdout, process.stderr][0,1,2]

Example:

示例:

// 开启一个额外的 fd=4 来与提供 startd 风格接口的程序进行交互。
spawn('prg', [], { stdio: ['pipe', null, null, null, 'pipe'] });

If the detached option is set, the child process will be made the leader of a new process group. This makes it possible for the child to continue running after the parent exits.

如果 detached 选项被设置,则子进程会被作为新进程组的 leader。这使得子进程可以在父进程退出后继续运行。

By default, the parent will wait for the detached child to exit. To prevent the parent from waiting for a given child, use the child.unref() method, and the parent's event loop will not include the child in its reference count.

缺省情况下,父进程会等待脱离了的子进程退出。要阻止父进程等待一个给出的子进程 child,使用 child.unref() 方法,则父进程的事件循环引用计数中将不会包含这个子进程。

Example of detaching a long-running process and redirecting its output to a file:

脱离一个长时间运行的进程并将它的输出重定向到一个文件的例子:

var fs = require('fs'),
    spawn = require('child_process').spawn,
    out = fs.openSync('./out.log', 'a'),
    err = fs.openSync('./out.log', 'a');

var child = spawn('prg', [], {
  detached: true,
  stdio: [ 'ignore', out, err ]
});

child.unref();

When using the detached option to start a long-running process, the process will not stay running in the background unless it is provided with a stdio configuration that is not connected to the parent. If the parent's stdio is inherited, the child will remain attached to the controlling terminal.

当使用 detached 选项来启动一个长时间运行的进程,该进程不会在后台保持运行,除非向它提供了一个不连接到父进程的 stdio 配置。如果继承了父进程的 stdio,则子进程会继续附着在控制终端。

There is a deprecated option called customFds which allows one to specify specific file descriptors for the stdio of the child process. This API was not portable to all platforms and therefore removed. With customFds it was possible to hook up the new process' [stdin, stdout, stderr] to existing streams; -1 meant that a new stream should be created. Use at your own risk.

有一个已废弃的选项 customFds 允许指定特定文件描述符作为子进程的 stdio。该 API 无法移植到所有平台,因此被移除。使用 customFds 可以将新进程的 [stdin, stdout, stderr] 钩到已有流上;-1 表示创建新流。自己承担使用风险。

See also: child_process.exec() and child_process.fork()

参阅:child_process.exec()child_process.fork()

child_process.exec(command, [options], callback)#

  • command String The command to run, with space-separated arguments
  • options Object
    • cwd String Current working directory of the child process
    • env Object Environment key-value pairs
    • encoding String (Default: 'utf8')
    • shell String Shell to execute the command with (Default: '/bin/sh' on UNIX, 'cmd.exe' on Windows, The shell should understand the -c switch on UNIX or /s /c on Windows. On Windows, command line parsing should be compatible with cmd.exe.)
    • timeout Number (Default: 0)
    • maxBuffer Number (Default: 200*1024)
    • killSignal String (Default: 'SIGTERM')
  • callback Function called with the output when process terminates
    • error Error
    • stdout Buffer
    • stderr Buffer
  • Return: ChildProcess object

  • command String 将要执行的命令,用空格分隔参数

  • options Object
    • cwd String 子进程的当前工作目录
    • env Object 环境变量键值对
    • encoding String 编码(缺省为 'utf8')
    • shell String 运行命令的 shell(UNIX 上缺省为 '/bin/sh',Windows 上缺省为 'cmd.exe'。该 shell 在 UNIX 上应当接受 -c 开关,在 Windows 上应当接受 /s /c 开关。在 Windows 中,命令行解析应当兼容 cmd.exe。)
    • timeout Number 超时(缺省为 0)
    • maxBuffer Number 最大缓冲(缺省为 200*1024)
    • killSignal String 结束信号(缺省为 'SIGTERM')
  • callback Function 进程结束时回调并带上输出
    • error Error
    • stdout Buffer
    • stderr Buffer
  • 返回:ChildProcess 对象

Runs a command in a shell and buffers the output.

在 shell 中执行一个命令并缓冲输出。

var exec = require('child_process').exec,
    child;

child = exec('cat *.js bad_file | wc -l',
  function (error, stdout, stderr) {
    console.log('stdout: ' + stdout);
    console.log('stderr: ' + stderr);
    if (error !== null) {
      console.log('exec error: ' + error);
    }
});

The callback gets the arguments (error, stdout, stderr). On success, error will be null. On error, error will be an instance of Error and err.code will be the exit code of the child process, and err.signal will be set to the signal that terminated the process.

回调参数为 (error, stdout, stderr)。当成功时,error 会是 null。当遇到错误时,error 会是一个 Error 实例,并且 err.code 会是子进程的退出代码,同时 err.signal 会被设置为结束进程的信号名。

There is a second optional argument to specify several options. The default options are

第二个可选的参数用于指定一些选项,缺省选项为:

{ encoding: 'utf8',
  timeout: 0,
  maxBuffer: 200*1024,
  killSignal: 'SIGTERM',
  cwd: null,
  env: null }

If timeout is greater than 0, then it will kill the child process if it runs longer than timeout milliseconds. The child process is killed with killSignal (default: 'SIGTERM'). maxBuffer specifies the largest amount of data allowed on stdout or stderr - if this value is exceeded then the child process is killed.

如果 timeout 大于 0,则当进程运行超过 timeout 毫秒后会被终止。子进程使用 killSignal 信号结束(缺省为 'SIGTERM')。maxBuffer 指定了 stdout 或 stderr 所允许的最大数据量,如果超出这个值则子进程会被终止。

child_process.execFile(file, args, options, callback)#

  • file String The filename of the program to run
  • args Array List of string arguments
  • options Object
    • cwd String Current working directory of the child process
    • env Object Environment key-value pairs
    • encoding String (Default: 'utf8')
    • timeout Number (Default: 0)
    • maxBuffer Number (Default: 200*1024)
    • killSignal String (Default: 'SIGTERM')
  • callback Function called with the output when process terminates
    • error Error
    • stdout Buffer
    • stderr Buffer
  • Return: ChildProcess object

  • file String 要运行的程序的文件名

  • args Array 字符串参数列表
  • options Object
    • cwd String 子进程的当前工作目录
    • env Object 环境变量键值对
    • encoding String 编码(缺省为 'utf8')
    • timeout Number 超时(缺省为 0)
    • maxBuffer Number 最大缓冲(缺省为 200*1024)
    • killSignal String 结束信号(缺省为 'SIGTERM')
  • callback Function 进程结束时回调并带上输出
    • error Error
    • stdout Buffer
    • stderr Buffer
  • 返回:ChildProcess 对象

This is similar to child_process.exec() except it does not execute a subshell but rather the specified file directly. This makes it slightly leaner than child_process.exec. It has the same options.

该方法类似于 child_process.exec(),但是它不会执行一个子 shell,而是直接执行所指定的文件。因此它稍微比 child_process.exec 精简,参数与之一致。

child_process.fork(modulePath, [args], [options])#

  • modulePath String The module to run in the child
  • args Array List of string arguments
  • options Object
    • cwd String Current working directory of the child process
    • env Object Environment key-value pairs
    • encoding String (Default: 'utf8')
    • execPath String Executable used to create the child process
  • Return: ChildProcess object

  • modulePath String 子进程中运行的模块

  • args Array 字符串参数列表
  • options Object
    • cwd String 子进程的当前工作目录
    • env Object 环境变量键值对
    • encoding String 编码(缺省为 'utf8')
    • execPath String 创建子进程的可执行文件
  • 返回:ChildProcess 对象

This is a special case of the spawn() functionality for spawning Node processes. In addition to having all the methods in a normal ChildProcess instance, the returned object has a communication channel built-in. See child.send(message, [sendHandle]) for details.

该方法是 spawn() 的特殊情景,用于派生 Node 进程。除了普通 ChildProcess 实例所具有的所有方法,所返回的对象还具有内建的通讯通道。详见 child.send(message, [sendHandle])

By default the spawned Node process will have the stdout, stderr associated with the parent's. To change this behavior set the silent property in the options object to true.

缺省情况下所派生的 Node 进程的 stdout、stderr 会关联到父进程。要更改该行为,可将 options 对象中的 silent 属性设置为 true

The child process does not automatically exit once it's done, you need to call process.exit() explicitly. This limitation may be lifted in the future.

子进程完成后并不会自动退出,您需要明确地调用 process.exit()。该限制可能会在未来的版本里解除。

These child Nodes are still whole new instances of V8. Assume at least 30ms startup and 10mb memory for each new Node. That is, you cannot create many thousands of them.

这些子 Node 是全新的 V8 实例,假设每个新的 Node 需要至少 30 毫秒的启动时间和 10MB 内存,就是说您不能创建成百上千个这样的实例。

The execPath property in the options object allows for a process to be created for the child rather than the current node executable. This should be done with care and by default will talk over the fd represented an environmental variable NODE_CHANNEL_FD on the child process. The input and output on this fd is expected to be line delimited JSON objects.

options 对象中的 execPath 属性可以用非当前 node 可执行文件来创建子进程。这需要小心使用,并且缺省情况下会使用子进程上的 NODE_CHANNEL_FD 环境变量所指定的文件描述符来通讯。该文件描述符的输入和输出假定为以行分割的 JSON 对象。

断言 (assert)#

稳定度: 5 - 已锁定

This module is used for writing unit tests for your applications, you can access it with require('assert').

此模块主要用于对您的程序进行单元测试,要使用它,请 require('assert')

assert.fail(actual, expected, message, operator)#

Throws an exception that displays the values for actual and expected separated by the provided operator.

抛出异常:显示为被 operator (分隔符)所分隔的 actual (实际值)和 expected (期望值)。

assert(value, message), assert.ok(value, [message])#

Tests if value is truthy, it is equivalent to assert.equal(true, !!value, message);

测试结果是否为真(true),相当于 assert.equal(true, !!value, message);

assert.equal(actual, expected, [message])#

Tests shallow, coercive equality with the equal comparison operator ( == ).

浅层的强制相等性测试,等同于使用相等操作符(==)。

assert.notEqual(actual, expected, [message])#

Tests shallow, coercive non-equality with the not equal comparison operator ( != ).

浅层的强制不相等性测试,等同于使用不等操作符(!=)。

assert.deepEqual(actual, expected, [message])#

Tests for deep equality.

用于深度匹配测试。

assert.notDeepEqual(actual, expected, [message])#

Tests for any deep inequality.

用于深度非匹配测试。

assert.strictEqual(actual, expected, [message])#

Tests strict equality, as determined by the strict equality operator ( === )

用于严格相等测试,由严格相等操作符(===)的结果决定。

assert.notStrictEqual(actual, expected, [message])#

Tests strict non-equality, as determined by the strict not equal operator ( !== )

用于严格不相等测试,由严格不等操作符(!==)的结果决定。

assert.throws(block, [error], [message])#

Expects block to throw an error. error can be constructor, regexp or validation function.

预期 block 抛出一个错误。error 可以是构造函数、正则表达式或验证函数。

Validate instanceof using constructor:

用构造函数验证 instanceof:

assert.throws(
  function() {
    throw new Error("错误值");
  },
  Error
);

Validate error message using RegExp:

用正则表达式验证错误消息。

assert.throws(
  function() {
    throw new Error("Wrong value");
  },
  /value/
);

Custom error validation:

自定义错误校验:

assert.throws(
  function() {
    throw new Error("Wrong value");
  },
  function(err) {
    if ( (err instanceof Error) && /value/.test(err) ) {
      return true;
    }
  },
  "unexpected error"
);

assert.doesNotThrow(block, [message])#

Expects block not to throw an error, see assert.throws for details.

预期 block 不会抛出错误,详见 assert.throws。

assert.ifError(value)#

Tests if value is not a false value, throws if it is a true value. Useful when testing the first argument, error in callbacks.

测试值是否不为 false,当为 true 时抛出。常用于回调中第一个 error 参数的检查。

TTY#

稳定度: 2 - 不稳定

The tty module houses the tty.ReadStream and tty.WriteStream classes. In most cases, you will not need to use this module directly.

tty 模块提供了 tty.ReadStreamtty.WriteStream 类。在大部分情况下,您都不会需要直接使用此模块。

When node detects that it is being run inside a TTY context, then process.stdin will be a tty.ReadStream instance and process.stdout will be a tty.WriteStream instance. The preferred way to check if node is being run in a TTY context is to check process.stdout.isTTY:

当 node 检测到它正运行于 TTY 上下文中时,process.stdin 将会是一个 tty.ReadStream 实例,且 process.stdout 也将会是一个 tty.WriteStream 实例。检查 node 是否运行于 TTY 上下文的首选方式是检查 process.stdout.isTTY

$ node -p -e "Boolean(process.stdout.isTTY)"
true
$ node -p -e "Boolean(process.stdout.isTTY)" | cat
false

tty.isatty(fd)#

Returns true or false depending on if the fd is associated with a terminal.

fd 关联于中端则返回 true,反之返回 false

tty.setRawMode(mode)#

Deprecated. Use tty.ReadStream#setRawMode() (i.e. process.stdin.setRawMode()) instead.

已废弃,请使用 tty.ReadStream#setRawMode()(如 process.stdin.setRawMode())。

类: ReadStream#

A net.Socket subclass that represents the readable portion of a tty. In normal circumstances, process.stdin will be the only tty.ReadStream instance in any node program (only when isatty(0) is true).

一个 net.Socket 子类,代表 TTY 的可读部分。通常情况下,在所有 node 程序中 process.stdin 会是仅有的 tty.ReadStream 实例(仅当 isatty(0) 为 true 时)。

rs.isRaw#

A Boolean that is initialized to false. It represents the current "raw" state of the tty.ReadStream instance.

一个 Boolean,初始为 false,代表 tty.ReadStream 实例的当前 "raw" 状态。

rs.setRawMode(mode)#

mode should be true or false. This sets the properties of the tty.ReadStream to act either as a raw device or default. isRaw will be set to the resulting mode.

mode 可以是 truefalse。它设定 tty.ReadStream 的属性表现为原始设备或缺省。isRaw 会被设置为结果模式。

类: WriteStream#

A net.Socket subclass that represents the writable portion of a tty. In normal circumstances, process.stdout will be the only tty.WriteStream instance ever created (and only when isatty(1) is true).

一个 net.Socket 子类,代表 TTY 的可写部分。通常情况下,process.stdout 会是仅有的 tty.WriteStream 实例(仅当 isatty(1) 为 true 时)。

ws.columns#

A Number that gives the number of columns the TTY currently has. This property gets updated on "resize" events.

一个 Number,表示 TTY 当前的列数。该属性会在 "resize" 事件中被更新。

ws.rows#

A Number that gives the number of rows the TTY currently has. This property gets updated on "resize" events.

一个 Number,表示 TTY 当前的行数。该属性会在 "resize" 事件中被更新。

事件: 'resize'#

function () {}

function () {}

Emitted by refreshSize() when either of the columns or rows properties has changed.

refreshSize()columnsrows 属性被改变时触发。

process.stdout.on('resize', function() {
  console.log('screen size has changed!');
  console.log(process.stdout.columns + 'x' + process.stdout.rows);
});



Zlib#

稳定度: 3 - 稳定

You can access this module with:

你可以这样引入此模块:

var zlib = require('zlib');

This provides bindings to Gzip/Gunzip, Deflate/Inflate, and DeflateRaw/InflateRaw classes. Each class takes the same options, and is a readable/writable Stream.

这个模块提供了对Gzip/Gunzip, Deflate/Inflate和DeflateRaw/InflateRaw类的绑定。每一个类都可以接收相同的选项,并且本身也是一个可读写的Stream类。

例子#

Compressing or decompressing a file can be done by piping an fs.ReadStream into a zlib stream, then into an fs.WriteStream.

压缩或解压缩一个文件可以通过导流一个 fs.ReadStream 到一个 zlib 流,然后到一个 fs.WriteStream 来完成。

var zlib = require('zlib');
var fs = require('fs');
var gzip = zlib.createGzip();
var inp = fs.createReadStream('input.txt');
var out = fs.createWriteStream('input.txt.gz');

inp.pipe(gzip).pipe(out);

Compressing or decompressing data in one step can be done by using the convenience methods.

一步压缩或解压缩数据可以通过快捷方法来完成。

var buffer = new Buffer('eJzT0yMAAGTvBe8=', 'base64');
zlib.unzip(buffer, function(err, buffer) {
  if (!err) {
    console.log(buffer.toString());
  }
});

To use this module in an HTTP client or server, use the accept-encoding on requests, and the content-encoding header on responses.

要在 HTTP 客户端或服务器中使用此模块,请在请求和响应中使用 accept-encodingcontent-encoding 头。

Note: these examples are drastically simplified to show the basic concept. Zlib encoding can be expensive, and the results ought to be cached. See Memory Usage Tuning below for more information on the speed/memory/compression tradeoffs involved in zlib usage.

注意:这些例子只是极其简单地展示了基础概念。Zlib 编码的开销可能很大,其结果应当被缓存。参阅下文的内存使用调优,了解 zlib 使用中速度/内存/压缩率之间的权衡取舍。

var zlib = require('zlib');
var http = require('http');
var fs = require('fs');

http.createServer(function(request, response) {
  var raw = fs.createReadStream('index.html');
  var acceptEncoding = request.headers['accept-encoding'] || '';

  // 注意: 这不是一个合格的 accept-encoding 解析器
  // 详见 http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3
  if (acceptEncoding.match(/\bdeflate\b/)) {
    response.writeHead(200, { 'content-encoding': 'deflate' });
    raw.pipe(zlib.createDeflate()).pipe(response);
  } else if (acceptEncoding.match(/\bgzip\b/)) {
    response.writeHead(200, { 'content-encoding': 'gzip' });
    raw.pipe(zlib.createGzip()).pipe(response);
  } else {
    response.writeHead(200, {});
    raw.pipe(response);
  }
}).listen(1337);

zlib.createGzip([options])#

Returns a new Gzip object with an options.

options 所给选项返回一个新的 Gzip 对象。

zlib.createGunzip([options])#

Returns a new Gunzip object with an options.

options 所给选项返回一个新的 Gunzip 对象。

zlib.createDeflate([options])#

Returns a new Deflate object with an options.

options 所给选项返回一个新的 Deflate 对象。

zlib.createInflate([options])#

Returns a new Inflate object with an options.

options 所给选项返回一个新的 Inflate 对象。

zlib.createDeflateRaw([options])#

Returns a new DeflateRaw object with an options.

options 所给选项返回一个新的 DeflateRaw 对象。

zlib.createInflateRaw([options])#

Returns a new InflateRaw object with an options.

options 所给选项返回一个新的 InflateRaw 对象。

zlib.createUnzip([options])#

Returns a new Unzip object with an options.

options 所给选项返回一个新的 Unzip 对象。

类: zlib.Zlib#

Not exported by the zlib module. It is documented here because it is the base class of the compressor/decompressor classes.

这个类未被 zlib 模块导出,编入此文档是因为它是其它压缩器/解压缩器的基类。

zlib.flush([kind], callback)#

kind defaults to zlib.Z_FULL_FLUSH.

kind 缺省为 zlib.Z_FULL_FLUSH

Flush pending data. Don't call this frivolously, premature flushes negatively impact the effectiveness of the compression algorithm.

刷出待处理的数据。请勿随意调用此方法,过早的刷新会对压缩算法的效果产生负面影响。

zlib.params(level, strategy, callback)#

Dynamically update the compression level and compression strategy. Only applicable to deflate algorithm.

动态更新压缩级别和压缩策略。仅对 deflate 算法有效。

zlib.reset()#

Reset the compressor/decompressor to factory defaults. Only applicable to the inflate and deflate algorithms.

将压缩器/解压缩器重置为缺省值。仅对 inflate 和 deflate 算法有效。

类: zlib.Gzip#

Compress data using gzip.

使用 gzip 压缩数据。

类: zlib.Gunzip#

Decompress a gzip stream.

解压缩一个 gzip 流。

类: zlib.Deflate#

Compress data using deflate.

使用 deflate 压缩数据。

类: zlib.Inflate#

Decompress a deflate stream.

解压缩一个 deflate 流。

类: zlib.DeflateRaw#

Compress data using deflate, and do not append a zlib header.

使用 deflate 压缩数据,并且不附带 zlib 头。

类: zlib.InflateRaw#

Decompress a raw deflate stream.

解压缩一个原始 deflate 流。

类: zlib.Unzip#

Decompress either a Gzip- or Deflate-compressed stream by auto-detecting the header.

自动识别头部来解压缩一个以 gzip 或 deflate 压缩的流。

快捷方法#

All of these take a string or buffer as the first argument, an optional second argument to supply options to the zlib classes and will call the supplied callback with callback(error, result).

所有这些方法的第一个参数都可以是字符串或 Buffer;可选地可以将传给 zlib 类的选项作为第二个参数传入;回调格式为 callback(error, result)

zlib.deflate(buf, [options], callback)#

Compress a string with Deflate.

使用 Deflate 压缩一个字符串。

zlib.deflateRaw(buf, [options], callback)#

Compress a string with DeflateRaw.

使用 DeflateRaw 压缩一个字符串。

zlib.gzip(buf, [options], callback)#

Compress a string with Gzip.

使用 Gzip 压缩一个字符串。

zlib.gunzip(buf, [options], callback)#

Decompress a raw Buffer with Gunzip.

使用 Gunzip 解压缩一个原始的 Buffer。

zlib.inflate(buf, [options], callback)#

Decompress a raw Buffer with Inflate.

使用 Inflate 解压缩一个原始的 Buffer。

zlib.inflateRaw(buf, [options], callback)#

Decompress a raw Buffer with InflateRaw.

使用 InflateRaw 解压缩一个原始的 Buffer。

zlib.unzip(buf, [options], callback)#

Decompress a raw Buffer with Unzip.

使用 Unzip 解压缩一个原始的 Buffer。

选项#

Each class takes an options object. All options are optional.

各个类都有一个选项对象。所有选项都是可选的。

Note that some options are only relevant when compressing, and are ignored by the decompression classes.

请注意有些选项仅对压缩有效,并会被解压缩类所忽略。

  • flush (default: zlib.Z_NO_FLUSH)
  • chunkSize (default: 16*1024)
  • windowBits
  • level (compression only)
  • memLevel (compression only)
  • strategy (compression only)
  • dictionary (deflate/inflate only, empty dictionary by default)

  • flush(缺省:zlib.Z_NO_FLUSH

  • chunkSize(缺省:16*1024)
  • windowBits
  • level(仅用于压缩)
  • memLevel(仅用于压缩)
  • strategy(仅用于压缩)
  • dictionary(仅用于 deflate/inflate,缺省为空字典)

See the description of deflateInit2 and inflateInit2 at http://zlib.net/manual.html#Advanced for more information on these.

详情请参阅 http://zlib.net/manual.html#AdvanceddeflateInit2inflateInit2

内存使用调优#

From zlib/zconf.h, modified to node's usage:

来自 zlib/zconf.h,修改为 node 的用法:

The memory requirements for deflate are (in bytes):

deflate 的内存需求(按字节):

(1 << (windowBits+2)) +  (1 << (memLevel+9))

that is: 128K for windowBits=15 + 128K for memLevel = 8 (default values) plus a few kilobytes for small objects.

表示:windowBits = 15 时的 128K 加上 memLevel = 8 时的 128K(均为缺省值),再加上一些小对象占用的若干 KB。

For example, if you want to reduce the default memory requirements from 256K to 128K, set the options to:

举个例子,如果您需要将缺省内存需求从 256K 减少到 128K,设置选项:

{ windowBits: 14, memLevel: 7 }

Of course this will generally degrade compression (there's no free lunch).

当然这通常会降低压缩效果(天底下没有免费的午餐)。

The memory requirements for inflate are (in bytes)

inflate 的内存需求(按字节):

1 << windowBits

that is, 32K for windowBits=15 (default value) plus a few kilobytes for small objects.

表示 windowBits = 15(缺省值)时的 32K,再加上一些小对象占用的若干 KB。

This is in addition to a single internal output slab buffer of size chunkSize, which defaults to 16K.

除此之外,还有一个大小为 chunkSize 的内部输出缓冲区,缺省为 16K。

The speed of zlib compression is affected most dramatically by the level setting. A higher level will result in better compression, but will take longer to complete. A lower level will result in less compression, but will be much faster.

zlib 压缩的速度主要受压缩级别 level 的影响。更高的压缩级别会有更好的压缩率,但也要花费更长时间。更低的压缩级别会有较低压缩率,但速度更快。

In general, greater memory usage options will mean that node has to make fewer calls to zlib, since it'll be able to process more data in a single write operation. So, this is another factor that affects the speed, at the cost of memory usage.

通常,使用更多内存的选项意味着 node 能减少对 zlib 的调用,因为单次 write 操作能处理更多数据。因此,这是另一个以内存占用为代价影响速度的因素。

常量#

All of the constants defined in zlib.h are also defined on require('zlib'). In the normal course of operations, you will not need to ever set any of these. They are documented here so that their presence is not surprising. This section is taken almost directly from the zlib documentation. See http://zlib.net/manual.html#Constants for more details.

所有在 zlib.h 中定义的常量同样也定义在 require('zlib') 中。 在通常情况下您几乎不会用到它们,编入文档只是为了让您不会对它们的存在感到惊讶。该章节几乎完全来自 zlib 的文档。详见 http://zlib.net/manual.html#Constants

Allowed flush values.

允许的 flush 取值。

  • zlib.Z_NO_FLUSH
  • zlib.Z_PARTIAL_FLUSH
  • zlib.Z_SYNC_FLUSH
  • zlib.Z_FULL_FLUSH
  • zlib.Z_FINISH
  • zlib.Z_BLOCK
  • zlib.Z_TREES

Return codes for the compression/decompression functions. Negative values are errors, positive values are used for special but normal events.

压缩/解压缩函数的返回值。负数代表错误,正数代表特殊但正常的事件。

  • zlib.Z_OK
  • zlib.Z_STREAM_END
  • zlib.Z_NEED_DICT
  • zlib.Z_ERRNO
  • zlib.Z_STREAM_ERROR
  • zlib.Z_DATA_ERROR
  • zlib.Z_MEM_ERROR
  • zlib.Z_BUF_ERROR
  • zlib.Z_VERSION_ERROR

Compression levels.

压缩级别。

  • zlib.Z_NO_COMPRESSION
  • zlib.Z_BEST_SPEED
  • zlib.Z_BEST_COMPRESSION
  • zlib.Z_DEFAULT_COMPRESSION

Compression strategy.

压缩策略。

  • zlib.Z_FILTERED
  • zlib.Z_HUFFMAN_ONLY
  • zlib.Z_RLE
  • zlib.Z_FIXED
  • zlib.Z_DEFAULT_STRATEGY

Possible values of the data_type field.

data_type 字段的可能值。

  • zlib.Z_BINARY
  • zlib.Z_TEXT
  • zlib.Z_ASCII
  • zlib.Z_UNKNOWN

The deflate compression method (the only one supported in this version).

deflate 压缩方法(该版本仅支持此方法)。

  • zlib.Z_DEFLATED

For initializing zalloc, zfree, opaque.

初始化 zalloc/zfree/opaque。

  • zlib.Z_NULL

操作系统#

稳定度: 4 - 冻结

Provides a few basic operating-system related utility functions.

提供一些基本的操作系统相关函数。

Use require('os') to access this module.

使用 require('os') 来调用这个模块。

os.tmpdir()#

Returns the operating system's default directory for temp files.

返回操作系统默认的临时文件目录。

os.endianness()#

Returns the endianness of the CPU. Possible values are "BE" or "LE".

返回 CPU 的字节序,可能的值为 "BE" 或 "LE"。

os.hostname()#

Returns the hostname of the operating system.

返回操作系统的主机名。

os.type()#

Returns the operating system name.

返回操作系统名称。

os.platform()#

Returns the operating system platform.

返回操作系统平台。

os.arch()#

Returns the operating system CPU architecture. Possible values are "x64", "arm" and "ia32".

返回操作系统 CPU 架构,可能的值有 "x64""arm""ia32"

os.release()#

Returns the operating system release.

返回操作系统的发行版本。

os.uptime()#

Returns the system uptime in seconds.

返回操作系统运行的时间,以秒为单位。

os.loadavg()#

Returns an array containing the 1, 5, and 15 minute load averages.

返回一个包含 1、5、15 分钟平均负载的数组。

os.totalmem()#

Returns the total amount of system memory in bytes.

返回系统内存总量,单位为字节。

os.freemem()#

Returns the amount of free system memory in bytes.

返回操作系统空闲内存量,单位是字节。

os.cpus()#

Returns an array of objects containing information about each CPU/core installed: model, speed (in MHz), and times (an object containing the number of milliseconds the CPU/core spent in: user, nice, sys, idle, and irq).

返回一个对象数组,包含每个 CPU/内核的信息:model(型号)、speed(速度,单位 MHz)和 times(一个对象,包含 CPU/内核在 user、nice、sys、idle 和 irq 状态下所花费的毫秒数)。

Example inspection of os.cpus:

os.cpus 的示例:

[ { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 252020,
       nice: 0,
       sys: 30340,
       idle: 1070356870,
       irq: 0 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 306960,
       nice: 0,
       sys: 26980,
       idle: 1071569080,
       irq: 0 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 248450,
       nice: 0,
       sys: 21750,
       idle: 1070919370,
       irq: 0 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 256880,
       nice: 0,
       sys: 19430,
       idle: 1070905480,
       irq: 20 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 511580,
       nice: 20,
       sys: 40900,
       idle: 1070842510,
       irq: 0 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 291660,
       nice: 0,
       sys: 34360,
       idle: 1070888000,
       irq: 10 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 308260,
       nice: 0,
       sys: 55410,
       idle: 1071129970,
       irq: 880 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 266450,
       nice: 1480,
       sys: 34920,
       idle: 1072572010,
       irq: 30 } } ]

os.networkInterfaces()#

Get a list of network interfaces:

获取网络接口列表:

{ lo:
   [ { address: '127.0.0.1',
       netmask: '255.0.0.0',
       family: 'IPv4',
       mac: '00:00:00:00:00:00',
       internal: true },
     { address: '::1',
       netmask: 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff',
       family: 'IPv6',
       mac: '00:00:00:00:00:00',
       internal: true } ],
  eth0:
   [ { address: '192.168.1.108',
       netmask: '255.255.255.0',
       family: 'IPv4',
       mac: '01:02:03:0a:0b:0c',
       internal: false },
     { address: 'fe80::a00:27ff:fe4e:66a1',
       netmask: 'ffff:ffff:ffff:ffff::',
       family: 'IPv6',
       mac: '01:02:03:0a:0b:0c',
       internal: false } ] }

os.EOL#

A constant defining the appropriate End-of-line marker for the operating system.

一个常量,定义了操作系统相应的行尾标识。

调试器#

稳定度: 3 - 稳定

V8 comes with an extensive debugger which is accessible out-of-process via a simple TCP protocol. Node has a built-in client for this debugger. To use this, start Node with the debug argument; a prompt will appear:

V8 提供了一个强大的调试器,可以通过 TCP 协议从外部访问。Node 内建了这个调试器的客户端。要使用调试器,以 debug 参数启动 Node,出现提示符:

% node debug myscript.js
< debugger listening on port 5858
connecting... ok
break in /home/indutny/Code/git/indutny/myscript.js:1
  1 x = 5;
  2 setTimeout(function () {
  3   debugger;
debug>

Node's debugger client doesn't support the full range of commands, but simple step and inspection is possible. By putting the statement debugger; into the source code of your script, you will enable a breakpoint.

Node 的调试器客户端并未完整支持所有命令,但简单的步进和检查是可行的。通过在脚本的源代码中放置 debugger; 语句,您便可启用一个断点。

For example, suppose myscript.js looked like this:

比如,假设有一个类似这样的 myscript.js

// myscript.js
x = 5;
setTimeout(function () {
  debugger;
  console.log("world");
}, 1000);
console.log("hello");

Then once the debugger is run, it will break on line 4.

那么,当调试器运行时,它会在第 4 行中断:

% node debug myscript.js
< debugger listening on port 5858
connecting... ok
break in /home/indutny/Code/git/indutny/myscript.js:1
  1 x = 5;
  2 setTimeout(function () {
  3   debugger;
debug> cont
< hello
break in /home/indutny/Code/git/indutny/myscript.js:3
  1 x = 5;
  2 setTimeout(function () {
  3   debugger;
  4   console.log("world");
  5 }, 1000);
debug> next
break in /home/indutny/Code/git/indutny/myscript.js:4
  2 setTimeout(function () {
  3   debugger;
  4   console.log("world");
  5 }, 1000);
  6 console.log("hello");
debug> repl
Press Ctrl + C to leave debug repl
> x
5
> 2+2
4
debug> next
< world
break in /home/indutny/Code/git/indutny/myscript.js:5
  3   debugger;
  4   console.log("world");
  5 }, 1000);
  6 console.log("hello");
  7
debug> quit
%

The repl command allows you to evaluate code remotely. The next command steps over to the next line. There are a few other commands available and more to come. Type help to see others.

repl 命令允许您远程执行代码;next 命令步进到下一行。此外还有一些其它命令,输入 help 查看。

监视器#

You can watch expression and variable values while debugging your code. On every breakpoint each expression from the watchers list will be evaluated in the current context and displayed just before the breakpoint's source code listing.

调试代码时您可以监视表达式或变量的值。在每个断点处,监视器列表中的各个表达式会在当前上下文中求值,其结果显示在断点的源代码清单之前。

To start watching an expression, type watch("my_expression"). watchers prints the active watchers. To remove a watcher, type unwatch("my_expression").

输入 watch("my_expression") 开始监视一个表达式;watchers 显示活动监视器;unwatch("my_expression") 移除一个监视器。

命令参考#

步进#

  • cont, c - Continue execution
  • next, n - Step next
  • step, s - Step in
  • out, o - Step out
  • pause - Pause running code (like pause button in Developer Tools)

  • cont, c - 继续执行

  • next, n - 单步执行到下一行(step next)
  • step, s - 步入函数(step in)
  • out, o - 步出函数(step out)
  • pause - 暂停执行代码(类似开发者工具中的暂停按钮)

断点#

  • setBreakpoint(), sb() - Set breakpoint on current line
  • setBreakpoint(line), sb(line) - Set breakpoint on specific line
  • setBreakpoint('fn()'), sb(...) - Set breakpoint on a first statement in functions body
  • setBreakpoint('script.js', 1), sb(...) - Set breakpoint on first line of script.js
  • clearBreakpoint, cb(...) - Clear breakpoint

  • setBreakpoint(), sb() - 在当前行设置断点

  • setBreakpoint(line), sb(line) - 在指定行设置断点
  • setBreakpoint('fn()'), sb(...) - 在函数体的第一条语句设置断点
  • setBreakpoint('script.js', 1), sb(...) - 在 script.js 的第一行设置断点
  • clearBreakpoint, cb(...) - 清除断点

It is also possible to set a breakpoint in a file (module) that isn't loaded yet:

在一个尚未被加载的文件(模块)中设置断点也是可行的:

% ./node debug test/fixtures/break-in-module/main.js
< debugger listening on port 5858
connecting to port 5858... ok
break in test/fixtures/break-in-module/main.js:1
  1 var mod = require('./mod.js');
  2 mod.hello();
  3 mod.hello();
debug> setBreakpoint('mod.js', 23)
Warning: script 'mod.js' was not loaded yet.
  1 var mod = require('./mod.js');
  2 mod.hello();
  3 mod.hello();
debug> c
break in test/fixtures/break-in-module/mod.js:23
 21
 22 exports.hello = function() {
 23   return 'hello from module';
 24 };
 25
debug>

信息#

  • backtrace, bt - Print backtrace of current execution frame
  • list(5) - List scripts source code with 5 line context (5 lines before and after)
  • watch(expr) - Add expression to watch list
  • unwatch(expr) - Remove expression from watch list
  • watchers - List all watchers and their values (automatically listed on each breakpoint)
  • repl - Open debugger's repl for evaluation in debugging script's context

  • backtrace, bt - 显示当前执行框架的回溯

  • list(5) - 显示脚本源代码的 5 行上下文(之前 5 行和之后 5 行)
  • watch(expr) - 向监视列表添加表达式
  • unwatch(expr) - 从监视列表移除表达式
  • watchers - 列出所有监视器和它们的值(每个断点会自动列出)
  • repl - 在所调试的脚本的上下文中打开调试器的 repl 执行代码

执行控制#

  • run - Run script (automatically runs on debugger's start)
  • restart - Restart script
  • kill - Kill script

  • run - 运行脚本(调试器开始时自动运行)

  • restart - 重新运行脚本
  • kill - 终止脚本

杂项#

  • scripts - List all loaded scripts
  • version - Display v8's version

  • scripts - 列出所有已加载的脚本

  • version - 显示 V8 的版本

高级使用#

The V8 debugger can be enabled and accessed either by starting Node with the --debug command-line flag or by signaling an existing Node process with SIGUSR1.

V8 调试器可以从两种方式启用和访问:以 --debug 命令行标志启动 Node;或者向已存在的 Node 进程发送 SIGUSR1 信号。

Once a process has been set in debug mode with this it can be connected to with the node debugger. Either connect to the pid or the URI to the debugger. The syntax is:

一旦一个进程进入了调试模式,它便可被 Node 调试器连接。调试器可以通过 pid 或 URI 来连接,语法是:

  • node debug -p <pid> - Connects to the process via the pid
  • node debug <URI> - Connects to the process via the URI such as localhost:5858
  • node debug -p <pid> - 通过 pid 连接进程
  • node debug <URI> - 通过类似 localhost:5858 的 URI 连接进程

集群#

稳定度: 1 - 实验性

A single instance of Node runs in a single thread. To take advantage of multi-core systems the user will sometimes want to launch a cluster of Node processes to handle the load.

单个 Node 实例运行在单个线程中。要发挥多核系统的能力,用户有时候需要启动一个 Node 进程集群来处理负载。

The cluster module allows you to easily create a network of processes that all share server ports.

集群模块允许你方便地创建一个共享服务器端口的进程网络。

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // 派生工作进程
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', function(worker, code, signal) {
    console.log('工作进程 ' + worker.process.pid + ' 被终止');
  });
} else {
  // 工作进程可以共享任意 TCP 连接
  // 本例中为 HTTP 服务器
  http.createServer(function(req, res) {
    res.writeHead(200);
    res.end("你好,世界\n");
  }).listen(8000);
}

Running node will now share port 8000 between the workers:

现在,运行 node 将会在所有工作进程间共享 8000 端口:

% NODE_DEBUG=cluster node server.js
23521,Master Worker 23524 online
23521,Master Worker 23526 online
23521,Master Worker 23523 online
23521,Master Worker 23528 online

This feature was introduced recently, and may change in future versions. Please try it out and provide feedback.

这是一个近期推出的功能,在未来版本中可能会有所改变。请尝试并提供反馈。

Also note that, on Windows, it is not yet possible to set up a named pipe server in a worker.

还要注意的是,在 Windows 上尚无法在工作进程中建立命名管道服务器。

它是如何工作的#

The worker processes are spawned using the child_process.fork method, so that they can communicate with the parent via IPC and pass server handles back and forth.

工作进程是通过使用 child_process.fork 方法派生的,因此它们可以通过 IPC(进程间通讯)与父进程通讯并互相传递服务器句柄。

The cluster module supports two methods of distributing incoming connections.

集群模块支持两种分配传入连接的方式。

The first one (and the default one on all platforms except Windows), is the round-robin approach, where the master process listens on a port, accepts new connections and distributes them across the workers in a round-robin fashion, with some built-in smarts to avoid overloading a worker process.

第一种(同时也是除 Windows 外所有平台的缺省方式)为轮询式:主进程监听一个端口,接受新连接,并以轮询的方式分配给工作进程,同时通过一些内建机制避免单个工作进程超载。

The second approach is where the master process creates the listen socket and sends it to interested workers. The workers then accept incoming connections directly.

第二种方式是,主进程建立监听套接字,并将它发送给感兴趣的工作进程,由工作进程直接接受传入连接。

The second approach should, in theory, give the best performance. In practice however, distribution tends to be very unbalanced due to operating system scheduler vagaries. Loads have been observed where over 70% of all connections ended up in just two processes, out of a total of eight.

第二种方式理论上性能最佳。然而在实践中,由于操作系统调度器的不确定性,分配往往十分不均衡。曾观测到在总共八个进程中,超过 70% 的连接落在其中两个进程上。

Because server.listen() hands off most of the work to the master process, there are three cases where the behavior between a normal node.js process and a cluster worker differs:

因为 server.listen() 将大部分工作交给了主进程,所以普通的 node.js 进程与集群工作进程会在三种情况下有所区别:

  1. server.listen({fd: 7}) Because the message is passed to the master, file descriptor 7 in the parent will be listened on, and the handle passed to the worker, rather than listening to the worker's idea of what the number 7 file descriptor references.
  2. server.listen(handle) Listening on handles explicitly will cause the worker to use the supplied handle, rather than talk to the master process. If the worker already has the handle, then it's presumed that you know what you are doing.
  3. server.listen(0) Normally, this will cause servers to listen on a random port. However, in a cluster, each worker will receive the same "random" port each time they do listen(0). In essence, the port is random the first time, but predictable thereafter. If you want to listen on a unique port, generate a port number based on the cluster worker ID.
  1. server.listen({fd: 7}) 由于消息被传递到主进程,父进程中的文件描述符 7 会被监听,并且句柄会被传递给工作进程,而不是监听工作进程中文件描述符 7 所引用的东西。
  2. server.listen(handle) 明确地监听一个句柄会使得工作进程使用所给句柄,而不是与主进程通讯。如果工作进程已经拥有了该句柄,则假定您知道您在做什么。
  3. server.listen(0) 通常,这会让服务器监听一个随机端口。然而,在集群中,各个工作进程每次 listen(0) 都会得到一样的“随机”端口。实际上,端口在第一次时是随机的,但在那之后却是可预知的。如果您想要监听一个唯一的端口,则请根据集群工作进程 ID 来生成端口号。

There is no routing logic in Node.js, or in your program, and no shared state between the workers. Therefore, it is important to design your program such that it does not rely too heavily on in-memory data objects for things like sessions and login.

由于在 Node.js 或您的程序中并没有路由逻辑,工作进程之间也没有共享的状态,因此在您的程序中,诸如会话和登录等功能应当被设计成不能太过依赖于内存中的数据对象。

Because workers are all separate processes, they can be killed or re-spawned depending on your program's needs, without affecting other workers. As long as there are some workers still alive, the server will continue to accept connections. Node does not automatically manage the number of workers for you, however. It is your responsibility to manage the worker pool for your application's needs.

由于工作进程都是独立的进程,因此它们会根据您的程序的需要被终止或重新派生,并且不会影响到其它工作进程。只要还有工作进程存在,服务器就会继续接受连接。但是,Node 不会自动为您管理工作进程的数量,根据您的程序所需管理工作进程池是您的责任。

cluster.schedulingPolicy#

The scheduling policy, either cluster.SCHED_RR for round-robin or cluster.SCHED_NONE to leave it to the operating system. This is a global setting and effectively frozen once you spawn the first worker or call cluster.setupMaster(), whatever comes first.

调度策略:cluster.SCHED_RR 表示轮询,cluster.SCHED_NONE 表示交由操作系统处理。这是一个全局设定,一旦您派生了第一个工作进程或调用了 cluster.setupMaster()(以先发生者为准),它便被实际冻结。

SCHED_RR is the default on all operating systems except Windows. Windows will change to SCHED_RR once libuv is able to effectively distribute IOCP handles without incurring a large performance hit.

SCHED_RR 是除 Windows 外所有操作系统上的缺省方式。只要 libuv 能够有效地分配 IOCP 句柄并且不产生巨大的性能损失,Windows 也将会更改为 SCHED_RR 方式。

cluster.schedulingPolicy can also be set through the NODE_CLUSTER_SCHED_POLICY environment variable. Valid values are "rr" and "none".

cluster.schedulingPolicy 也可以通过环境变量 NODE_CLUSTER_SCHED_POLICY 设定。有效值为 "rr""none"

cluster.settings#

  • Object

    • exec String file path to worker file. (Default=__filename)
    • args Array string arguments passed to worker. (Default=process.argv.slice(2))
    • silent Boolean whether or not to send output to parent's stdio. (Default=false)
  • Object

    • exec String 工作进程文件的路径。(缺省为 __filename
    • args Array 传递给工作进程的字符串参数。(缺省为 process.argv.slice(2)
    • silent Boolean 是否将输出发送到父进程的 stdio。(缺省为 false

All settings set by .setupMaster are stored in this settings object. This object is not supposed to be changed or set manually.

所有由 .setupMaster 设定的设置都会储存在此设置对象中。这个对象不应由您手动更改或设定。

cluster.isMaster#

  • Boolean

  • Boolean

True if the process is a master. This is determined by the process.env.NODE_UNIQUE_ID. If process.env.NODE_UNIQUE_ID is undefined, then isMaster is true.

如果进程为主进程则为 true。这是由 process.env.NODE_UNIQUE_ID 判断的,如果 process.env.NODE_UNIQUE_ID 为 undefined,则 isMastertrue

cluster.isWorker#

  • Boolean

  • Boolean

This boolean flag is true if the process is a worker forked from a master. If the process.env.NODE_UNIQUE_ID is set to a value, then isWorker is true.

如果当前进程是分支自主进程的工作进程,则该布尔标识的值为 true。如果 process.env.NODE_UNIQUE_ID 被设定为一个值,则 isWorkertrue

事件: 'fork'#

  • worker Worker object

  • worker Worker object

When a new worker is forked the cluster module will emit a 'fork' event. This can be used to log worker activity, and create you own timeout.

当一个新的工作进程被分支出来,cluster 模块会产生一个 'fork' 事件。这可被用于记录工作进程活动,以及创建您自己的超时判断。

cluster.on('fork', function(worker) {
  timeouts[worker.id] = setTimeout(errorMsg, 2000);
});
cluster.on('listening', function(worker, address) {
  clearTimeout(timeouts[worker.id]);
});
cluster.on('exit', function(worker, code, signal) {
  clearTimeout(timeouts[worker.id]);
  errorMsg();
});

事件: 'online'#

  • worker Worker object

  • worker Worker object

After forking a new worker, the worker should respond with an online message. When the master receives an online message it will emit this event. The difference between 'fork' and 'online' is that fork is emitted when the master tries to fork a worker, and 'online' is emitted when the worker is being executed.

分支出一个新的工作进程后,工作进程会响应一个在线消息。当主进程收到一个在线消息后,它会触发该事件。'fork' 和 'online' 的区别在于前者发生于主进程尝试分支出工作进程时,而后者发生于工作进程被执行时。

cluster.on('online', function(worker) {
  console.log("嘿嘿,工作进程完成分支并发出回应了");
});

事件: 'listening'#

  • worker Worker object
  • address Object

  • worker Worker object

  • address Object

When calling listen() from a worker, a 'listening' event is automatically assigned to the server instance. When the server is listening, a message is sent to the master where the 'listening' event is emitted.

当工作进程调用 listen() 时,一个 'listening' 事件会被自动分配到服务器实例上。当服务器开始监听时,一个消息会被发送至主进程,主进程随即触发 'listening' 事件。

The event handler is executed with two arguments, the worker contains the worker object and the address object contains the following connection properties: address, port and addressType. This is very useful if the worker is listening on more than one address.

事件处理器被执行时会带上两个参数。其中 worker 包含了工作进程对象,address 对象包含了下列连接属性:地址 address、端口号 port 和地址类型 addressType。如果工作进程监听多个地址,那么这些信息将十分有用。

cluster.on('listening', function(worker, address) {
  console.log("一个工作进程刚刚连接到 " + address.address + ":" + address.port);
});

事件: 'disconnect'#

  • worker Worker object

  • worker Worker object

When a workers IPC channel has disconnected this event is emitted. This will happen when the worker dies, usually after calling .kill().

当一个工作进程的 IPC 通道断开时此事件会发生。这发生于工作进程结束时,通常是调用 .kill() 之后。

When calling .disconnect(), there may be a delay between the disconnect and exit events. This event can be used to detect if the process is stuck in a cleanup or if there are long-living connections.

当调用 .disconnect() 后,disconnectexit 事件之间可能存在延迟。该事件可被用于检测进程是否被卡在清理过程或存在长连接。

cluster.on('disconnect', function(worker) {
  console.log('工作进程 #' + worker.id + ' 断开了连接');
});

事件: 'exit'#

  • worker Worker object
  • code Number the exit code, if it exited normally.
  • signal String the name of the signal (eg. 'SIGHUP') that caused the process to be killed.

  • worker Worker object

  • code Number 如果是正常退出则为退出代码。
  • signal String 使得进程被终止的信号的名称(比如 'SIGHUP')。

When any of the workers die the cluster module will emit the 'exit' event. This can be used to restart the worker by calling fork() again.

当任意工作进程终止时,集群模块会触发 'exit' 事件。通过再次调用 fork() 函数,可以利用该事件重启工作进程。

cluster.on('exit', function(worker, code, signal) {
  var exitCode = worker.process.exitCode;
  console.log('工作进程 ' + worker.process.pid + ' 被结束('+exitCode+')。正在重启...');
  cluster.fork();
});

事件: 'setup'#

  • worker Worker object

  • worker Worker object

When the .setupMaster() function has been executed this event emits. If .setupMaster() was not executed before fork() this function will call .setupMaster() with no arguments.

.setupMaster() 函数被执行时触发此事件。如果 .setupMaster()fork() 之前没被执行,那么它会不带参数调用 .setupMaster()

cluster.setupMaster([settings])#

  • settings Object

    • exec String file path to worker file. (Default=__filename)
    • args Array string arguments passed to worker. (Default=process.argv.slice(2))
    • silent Boolean whether or not to send output to parent's stdio. (Default=false)
  • settings Object

    • exec String 工作进程文件的路径。(缺省为 __filename
    • args Array 传给工作进程的字符串参数。(缺省为 process.argv.slice(2)
    • silent Boolean 是否将输出发送到父进程的 stdio。(缺省为 false

setupMaster is used to change the default 'fork' behavior. The new settings are effective immediately and permanently, they cannot be changed later on.

setupMaster 被用于更改缺省的 fork 行为。新的设置会立即永久生效,并且在之后不能被更改。

Example:

示例:

var cluster = require("cluster");
cluster.setupMaster({
  exec : "worker.js",
  args : ["--use", "https"],
  silent : true
});
cluster.fork();

cluster.fork([env])#

  • env Object Key/value pairs to add to child process environment.
  • return Worker object

  • env Object 添加到子进程环境变量中的键值对。

  • 返回 Worker object

Spawn a new worker process. This can only be called from the master process.

派生一个新的工作进程。这个函数只能在主进程中被调用。

cluster.disconnect([callback])#

  • callback Function called when all workers are disconnected and handlers are closed

  • callback Function 当所有工作进程都断开连接并且句柄被关闭时被调用

When calling this method, all workers will commit a graceful suicide. When they are disconnected all internal handlers will be closed, allowing the master process to die gracefully if no other event is waiting.

调用此方法时,所有的工作进程都会优雅地自行结束。当它们都断开连接后,所有的内部处理器都会被关闭,使得主进程可以在没有其它事件等待时优雅地结束。

The method takes an optional callback argument which will be called when finished.

该方法带有一个可选的回调参数,会在完成时被调用。

cluster.worker#

  • Object

  • Object

A reference to the current worker object. Not available in the master process.

对当前工作进程对象的引用。在主进程中不可用。

if (cluster.isMaster) {
  console.log('我是主进程');
  cluster.fork();
  cluster.fork();
} else if (cluster.isWorker) {
  console.log('我是工作进程 #' + cluster.worker.id);
}

cluster.workers#

  • Object

  • Object

A hash that stores the active worker objects, keyed by id field. Makes it easy to loop through all the workers. It is only available in the master process.

一个以 id 字段为键、储存活动工作进程对象的哈希表,便于遍历所有工作进程。仅在主进程中可用。

// 遍历所有工作进程
function eachWorker(callback) {
  for (var id in cluster.workers) {
    callback(cluster.workers[id]);
  }
}
eachWorker(function(worker) {
  worker.send('向一线工作者们致以亲切问候!');
});

Should you wish to reference a worker over a communication channel, using the worker's unique id is the easiest way to find the worker.

如果您希望通过通讯通道引用一个工作进程,那么使用工作进程的唯一标识是找到那个工作进程的最简单的办法。

socket.on('data', function(id) {
  var worker = cluster.workers[id];
});

类: Worker#

A Worker object contains all public information and method about a worker. In the master it can be obtained using cluster.workers. In a worker it can be obtained using cluster.worker.

一个 Worker 对象包含了工作进程的所有公开信息和方法。可通过主进程中的 cluster.workers 或工作进程中的 cluster.worker 取得。

worker.id#

  • String

  • String

Each new worker is given its own unique id, this id is stored in the id.

每个新的工作进程都被赋予一个唯一的标识,这个标识被储存在 id 中。

While a worker is alive, this is the key that indexes it in cluster.workers

在工作进程存活期间,这是它在 cluster.workers 中的索引键。

worker.process#

  • ChildProcess object

  • ChildProcess object

All workers are created using child_process.fork(), the returned object from this function is stored in process.

所有工作进程都是使用 child_process.fork() 创建的,该函数返回的对象被储存在 process 中。

See: Child Process module

参考:Child Process 模块

worker.suicide#

  • Boolean

  • Boolean

This property is a boolean. It is set when a worker dies after calling .kill() or immediately after calling the .disconnect() method. Until then it is undefined.

该属性是一个布尔值。它会在工作进程因调用 .kill() 而终止后被设置,或在调用 .disconnect() 方法后立即被设置。在此之前它的值是 undefined。

worker.send(message, [sendHandle])#

  • message Object
  • sendHandle Handle object

  • message Object

  • sendHandle Handle object

This function is equal to the send methods provided by child_process.fork(). In the master you should use this function to send a message to a specific worker. However in a worker you can also use process.send(message), since this is the same function.

该函数等同于 child_process.fork() 提供的 send 方法。在主进程中您可以用该函数向特定工作进程发送消息。当然,在工作进程中您也能使用 process.send(message),因为它们是同一个函数。

This example will echo back all messages from the master:

这个例子会回应来自主进程的所有消息:

if (cluster.isMaster) {
  var worker = cluster.fork();
  worker.send('你好');
} else if (cluster.isWorker) {
  process.on('message', function(msg) {
    process.send(msg);
  });
}

worker.kill([signal='SIGTERM'])#

  • signal String Name of the kill signal to send to the worker process.

  • signal String 发送给工作进程的终止信号的名称

This function will kill the worker, and inform the master to not spawn a new worker. The boolean suicide lets you distinguish between voluntary and accidental exit.

该函数会终止工作进程,并告知主进程不要派生一个新工作进程。布尔值 suicide 让您区分自行退出和意外退出。

// 终止工作进程
worker.kill();

This method is aliased as worker.destroy() for backwards compatibility.

该方法的别名是 worker.destroy(),以保持向后兼容。

worker.disconnect()#

When calling this function the worker will no longer accept new connections; they will be handled by any other listening worker instead. Existing connections will be allowed to exit as usual. When no more connections exist, the IPC channel to the worker will close, allowing it to die gracefully. When the IPC channel is closed the disconnect event will be emitted, followed by the exit event when the worker finally dies.

调用该函数后工作进程将不再接受新连接,但新连接仍会被其它正在监听的工作进程处理。已存在的连接允许正常退出。当没有连接存在,连接到工作进程的 IPC 通道会被关闭,以便工作进程安全地结束。当 IPC 通道关闭时 disconnect 事件会被触发,然后则是工作进程最终结束时触发的 exit 事件。

Because there might be long living connections, it is useful to implement a timeout. This example ask the worker to disconnect and after 2 seconds it will destroy the server. An alternative would be to execute worker.kill() after 2 seconds, but that would normally not allow the worker to do any cleanup if needed.

由于可能存在长连接,通常会实现一个超时机制。这个例子会告知工作进程断开连接,并且在 2 秒后销毁服务器。另一个备选方案是 2 秒后执行 worker.kill(),但那样通常会使得工作进程没有机会进行必要的清理。

  process.on('message', function(msg) {
    if (msg === 'force kill') {
      server.close();
    }
  });
}

事件: 'message'#

  • message Object

  • message Object

This event is the same as the one provided by child_process.fork(). In the master you should use this event, however in a worker you can also use process.on('message')

该事件和 child_process.fork() 所提供的一样。在主进程中您应当使用该事件,而在工作进程中您也可以使用 process.on('message')

As an example, here is a cluster that keeps count of the number of requests in the master process using the message system:

举个例子,这里有一个集群,使用消息系统在主进程中统计请求的数量:

    // 将请求通知主进程
    process.send({ cmd: 'notifyRequest' });
  }).listen(8000);
}

事件: 'online'#

Same as the cluster.on('online') event, but emits only when the state changes on the specified worker.

cluster.on('online') 事件一样,但仅当特定工作进程的状态改变时发生。

cluster.fork().on('online', function() {
  // 工作进程在线
});

事件: 'listening'#

  • address Object

  • address Object

Same as the cluster.on('listening') event, but emits only when the state changes on the specified worker.

cluster.on('listening') 事件一样,但仅当特定工作进程的状态改变时发生。

cluster.fork().on('listening', function(address) {
  // 工作进程正在监听
});

事件: 'disconnect'#

Same as the cluster.on('disconnect') event, but emits only when the state change on the specified worker.

cluster.on('disconnect') 事件一样,但仅当特定工作进程的状态改变时发生。

cluster.fork().on('disconnect', function() {
  // 工作进程断开了连接
});

事件: 'exit'#

  • code Number the exit code, if it exited normally.
  • signal String the name of the signal (eg. 'SIGHUP') that caused the process to be killed.

  • code Number 如果是正常退出则为退出代码。

  • signal String 使得进程被终止的信号的名称(比如 'SIGHUP')。

Emitted by the individual worker instance, when the underlying child process is terminated. See child_process event: 'exit'.

由单个工作进程实例在底层子进程被结束时触发。详见子进程事件: 'exit'

var worker = cluster.fork();
worker.on('exit', function(code, signal) {
  if( signal ) {
    console.log("worker was killed by signal: "+signal);
  } else if( code !== 0 ) {
    console.log("worker exited with error code: "+code);
  } else {
    console.log("worker success!");
  }
});


var worker = cluster.fork();
worker.on('exit', function(code, signal) {
  if( signal ) {
    console.log("工作进程被信号 " + signal + " 终止");
  } else if( code !== 0 ) {
    console.log("工作进程退出,错误码:" + code);
  } else {
    console.log("工作进程成功退出!");
  }
});

Smalloc#

稳定度: 1 - 实验性

smalloc.alloc(length[, receiver][, type])#

  • length Number <= smalloc.kMaxLength
  • receiver Object, Optional, Default: new Object
  • type Enum, Optional, Default: Uint8

  • length Number <= smalloc.kMaxLength

  • receiver Object 可选,缺省为 new Object
  • type Enum 可选,缺省为 Uint8

Returns receiver with allocated external array data. If no receiver is passed then a new Object will be created and returned.

返回分配了外部数组数据的 receiver。如果未传入 receiver,则会创建并返回一个新的 Object。

Buffers are backed by a simple allocator that only handles the assignation of external raw memory. Smalloc exposes that functionality.

Buffer 由一个仅处理外部原始内存分配的简易分配器支撑。Smalloc 暴露了这一功能。

This can be used to create your own Buffer-like classes. No other properties are set, so the user will need to keep track of other necessary information (e.g. length of the allocation).

这可用于创建你自己的类似 Buffer 的类。由于不会设置其它属性,因此使用者需要自行跟踪其它所需信息(比如所分配的长度 length)。

SimpleData.prototype = { /* ... */ };

It only checks if the receiver is an Object, and also not an Array. Because of this it is possible to allocate external array data to more than a plain Object.

它只检查 receiver 是否为一个非 Array 的 Object。因此,可以分配外部数组数据的不止纯 Object。

// { [Function allocMe] '0': 0, '1': 0, '2': 0 }

v8 does not support allocating external array data to an Array, and if passed will throw.

V8 不支持向一个 Array 分配外部数组数据,如果这么做将会抛出异常。

It's possible to specify the type of external array data you would like. All possible options are listed in smalloc.Types. Example usage:

您可以指定您想要的外部数组数据的类型。所有可取的值都已在 smalloc.Types 中列出。使用示例:

// { '0': 0, '1': 0.1, '2': 0.2 }

smalloc.copyOnto(source, sourceStart, dest, destStart, copyLength);#

  • source Object with external array allocation
  • sourceStart Position to begin copying from
  • dest Object with external array allocation
  • destStart Position to begin copying onto
  • copyLength Length of copy

  • source 分配了外部数组的来源对象

  • sourceStart 从这个位置开始拷贝
  • dest 分配了外部数组的目标对象
  • destStart 拷贝到这个位置
  • copyLength 拷贝的长度

Copy memory from one external array allocation to another. No arguments are optional, and any violation will throw.

从一个外部数组向另一个拷贝内存。所有参数都是必填,否则将会抛出异常。

// { '0': 4, '1': 6, '2': 2, '3': 3 }

copyOnto automatically detects the length of the allocation internally, so no need to set any additional properties for this to work.

copyOnto 会在内部自动检测分配的长度,因此无需为此设置任何额外的属性。

smalloc.dispose(obj)#

  • obj Object

  • obj 对象

Free memory that has been allocated to an object via smalloc.alloc.

释放已使用 smalloc.alloc 分配到一个对象的内存。

// {}

This is useful to reduce strain on the garbage collector, but developers must be careful. Cryptic errors may arise in applications that are difficult to trace.

这对于减轻垃圾回收器的负担有所帮助,但开发者务必小心:应用程序中可能会出现难以追踪的诡异错误。

// 将导致:
// Error: source has no external array data

dispose() does not support Buffers, and will throw if passed.

dispose() 不支持 Buffer,传入将会抛出异常。

smalloc.kMaxLength#

Size of maximum allocation. This is also applicable to Buffer creation.

最大的分配大小。该值同时也适用于 Buffer 的创建。

smalloc.Types#

Enum of possible external array types. Contains:

外部数组类型的可取值,包含:

  • Int8
  • Uint8
  • Int16
  • Uint16
  • Int32
  • Uint32
  • Float
  • Double
  • Uint8Clamped
  • Int8
  • Uint8
  • Int16
  • Uint16
  • Int32
  • Uint32
  • Float
  • Double
  • Uint8Clamped

关于本文档#

The goal of this documentation is to comprehensively explain the Node.js API, both from a reference as well as a conceptual point of view. Each section describes a built-in module or high-level concept.

本文档的目标是从参考和概念的角度全面解释 Node.js 的 API,每章节描述一个内置模块或高级概念。

Where appropriate, property types, method arguments, and the arguments provided to event handlers are detailed in a list underneath the topic heading.

在某些情况下,属性类型、方法参数以及事件处理过程(handler)参数 会被列在主标题下的列表中。

Every .html document has a corresponding .json document presenting the same information in a structured manner. This feature is experimental, and added for the benefit of IDEs and other utilities that wish to do programmatic things with the documentation.

每一个 .html 文件都对应一份内容相同的结构化 .json 文档。这个特性现在还是实验性质的,希望能够为一些需要对文档进行操作的IDE或者其他工具提供帮助。

Every .html and .json file is generated based on the corresponding .markdown file in the doc/api/ folder in node's source tree. The documentation is generated using the tools/doc/generate.js program. The HTML template is located at doc/template.html.

每个 .html.json 文件都是基于源码的 doc/api/ 目录下对应的 .markdown 文件生成的。本文档使用 tools/doc/generate.js 程序生成,HTML 模板文件为 doc/template.html

稳定度#

Throughout the documentation, you will see indications of a section's stability. The Node.js API is still somewhat changing, and as it matures, certain parts are more reliable than others. Some are so proven, and so relied upon, that they are unlikely to ever change at all. Others are brand new and experimental, or known to be hazardous and in the process of being redesigned.

在文档中,您可以了解每一节的稳定度。Node.js 的 API 仍在发生变化;随着它的成熟,某些部分会比其他部分更可靠。有些接口经过严格验证、被大量依赖,几乎不可能再改变;也有一些是新增的、实验性的,或者因被证实具有危险性而正在重新设计中。

The stability indices are as follows:

稳定度定义如下

稳定度: 5 - 已锁定
除非发现严重缺陷,该代码不会被更改。请不要对此区域提出更改,更改提议将被拒绝。

JSON 输出#

稳定度: 1 - 实验性

Every HTML file in the markdown has a corresponding JSON file with the same data.

每个通过 markdown 生成的 HTML 文件都对应于一个具有相同数据的 JSON 文件。

This feature is new as of node v0.6.12. It is experimental.

该特性引入于 node v0.6.12,目前仍是实验性功能。

概述#

An example of a web server written with Node which responds with 'Hello World':

一个输出 “Hello World” 的简单 Web 服务器例子:

console.log('服务器已运行,请打开 http://127.0.0.1:8124/');

To run the server, put the code into a file called example.js and execute it with the node program

要运行这个服务器,先将程序保存为文件 “example.js”,并使用 node 命令来执行:

> node example.js
服务器已运行,请打开 http://127.0.0.1:8124/

All of the examples in the documentation can be run similarly.

所有的文档中的例子均使用相同的方式运行。

全局对象#

These objects are available in all modules. Some of these objects aren't actually in the global scope but in the module scope - this will be noted.

这些对象在所有模块中都是可用的。有些对象实际上并非在全局作用域内而是在模块作用域内——这种情况在以下文档中会特别指出。

global#

  • {Object} The global namespace object.

  • {Object} 全局命名空间对象。

In browsers, the top-level scope is the global scope. That means that in browsers if you're in the global scope var something will define a global variable. In Node this is different. The top-level scope is not the global scope; var something inside a Node module will be local to that module.

在浏览器中,顶级作用域就是全局作用域。这就是说,在浏览器中,如果当前是在全局作用域内,var something将会声明一个全局变量。在Node中则不同。顶级作用域并非全局作用域,在Node模块里的var something只属于那个模块。

process#

  • {Object}

  • {Object}

The process object. See the process object section.

进程对象。见 进程对象章节。

console#

  • {Object}

  • {Object}

Used to print to stdout and stderr. See the console section.

用于打印标准输出和标准错误。见控制台章节。

类: Buffer#

  • {Function}

  • {Function}

Used to handle binary data. See the buffer section

用于处理二进制数据。见Buffer章节。

require()#

  • {Function}

  • {Function}

To require modules. See the Modules section. require isn't actually a global but rather local to each module.

引入模块。见Modules章节。require实际上并非全局的而是各个模块本地的。

require.resolve()#

Use the internal require() machinery to look up the location of a module, but rather than loading the module, just return the resolved filename.

使用内部的require()机制查找模块的位置,但不加载模块,只返回解析过的模块文件路径。

require.cache#

  • Object

  • Object

Modules are cached in this object when they are required. By deleting a key value from this object, the next require will reload the module.

模块在引入时会缓存到该对象。通过删除该对象的键值,下次调用require时会重新加载相应模块。

require.extensions#

稳定度:0 - 已废弃
  • {Object}

  • {Object}

Instruct require on how to handle certain file extensions.

指导require方法如何处理特定的文件扩展名。

Process files with the extension .sjs as .js:

.sjs文件作为.js文件处理:

require.extensions['.sjs'] = require.extensions['.js'];

Deprecated In the past, this list has been used to load non-JavaScript modules into Node by compiling them on-demand. However, in practice, there are much better ways to do this, such as loading modules via some other Node program, or compiling them to JavaScript ahead of time.

已废弃 之前,该列表用于按需编译非JavaScript模块并加载进Node。然而,实践中有更好的方式实现该功能,如通过其他Node程序加载模块,或提前将他们编译成JavaScript代码。

Since the Module system is locked, this feature will probably never go away. However, it may have subtle bugs and complexities that are best left untouched.

由于模块系统的 API 已锁定,该功能可能永远不会去掉。然而它可能存在一些细微的错误和复杂性,最好不要去触碰。

__filename#

  • {String}

  • {String}

The filename of the code being executed. This is the resolved absolute path of this code file. For a main program this is not necessarily the same filename used in the command line. The value inside a module is the path to that module file.

当前所执行代码文件的文件路径。这是该代码文件经过解析后的绝对路径。对于主程序来说,这和命令行中使用的文件路径未必是相同的。在模块中此变量值是该模块文件的路径。

Example: running node example.js from /Users/mjr

例子:在/Users/mjr下运行node example.js

console.log(__filename);
// /Users/mjr/example.js

__filename isn't actually a global but rather local to each module.

__filename实际上并非全局的而是各个模块本地的。

__dirname#

  • {String}

  • {String}

The name of the directory that the currently executing script resides in.

当前执行脚本所在目录的目录名。

Example: running node example.js from /Users/mjr

例子:在/Users/mjr下运行node example.js

console.log(__dirname);
// /Users/mjr

__dirname isn't actually a global but rather local to each module.

__dirname实际上并非全局的而是各个模块本地的。

module#

  • {Object}

  • {Object}

A reference to the current module. In particular module.exports is the same as the exports object. module isn't actually a global but rather local to each module.

当前模块的引用。特别地,module.exportsexports指向同一个对象。module实际上并非全局的而是各个模块本地的。

See the module system documentation for more information.

详情可见模块系统文档

exports#

A reference to the module.exports object which is shared between all instances of the current module and made accessible through require(). See module system documentation for details on when to use exports and when to use module.exports. exports isn't actually a global but rather local to each module.

module.exports对象的引用,该对象被当前模块的所有实例所共享,通过require()可访问该对象。 何时使用exports以及何时使用module.exports的详情可参见模块系统文档exports实际上并非全局的而是各个模块本地的。

See the module system documentation for more information.

详情可见模块系统文档

See the module section for more information.

关于模块系统的更多信息可参见模块

setTimeout(cb, ms)#

Run callback cb after at least ms milliseconds. The actual delay depends on external factors like OS timer granularity and system load.

至少ms毫秒后调用回调cb。实际延迟取决于外部因素,如操作系统定时器粒度及系统负载。

The timeout must be in the range of 1-2,147,483,647 inclusive. If the value is outside that range, it's changed to 1 millisecond. Broadly speaking, a timer cannot span more than 24.8 days.

超时值必须在1-2147483647的范围内(包含1和2147483647)。如果该值超出范围,则该值被当作1毫秒处理。一般来说,一个定时器不能超过24.8天。

Returns an opaque value that represents the timer.

返回一个代表该定时器的句柄值。

clearTimeout(t)#

Stop a timer that was previously created with setTimeout(). The callback will not execute.

停止一个之前通过setTimeout()创建的定时器。回调不会再被执行。

setInterval(cb, ms)#

Run callback cb repeatedly every ms milliseconds. Note that the actual interval may vary, depending on external factors like OS timer granularity and system load. It's never less than ms but it may be longer.

每隔ms毫秒重复调用回调cb。注意,取决于外部因素,如操作系统定时器粒度及系统负载,实际间隔可能会改变。它不会少于ms但可能比ms长。

The interval must be in the range of 1-2,147,483,647 inclusive. If the value is outside that range, it's changed to 1 millisecond. Broadly speaking, a timer cannot span more than 24.8 days.

间隔值必须在1-2147483647的范围内(包含1和2147483647)。如果该值超出范围,则该值被当作1毫秒处理。一般来说,一个定时器不能超过24.8天。

Returns an opaque value that represents the timer.

返回一个代表该定时器的句柄值。

clearInterval(t)#

Stop a timer that was previously created with setInterval(). The callback will not execute.

停止一个之前通过setInterval()创建的定时器。回调不会再被执行。

The timer functions are global variables. See the timers section.

定时器函数是全局变量。见定时器章节。

控制台#

稳定度: 4 - 冻结
  • {Object}

  • {Object}

For printing to stdout and stderr. Similar to the console object functions provided by most web browsers, here the output is sent to stdout or stderr.

用于向 stdout 和 stderr 打印字符。类似于大部分 Web 浏览器提供的 console 对象函数,在这里则是输出到 stdout 或 stderr。

The console functions are synchronous when the destination is a terminal or a file (to avoid lost messages in case of premature exit) and asynchronous when it's a pipe (to avoid blocking for long periods of time).

当输出目标是一个终端或者文件时,console 函数是同步的(为了防止过早退出时丢失信息);当输出目标是一个管道时,它们则是异步的(防止阻塞过长时间)。

That is, in the following example, stdout is non-blocking while stderr is blocking:

也就是说,在下面的例子中,stdout 是非阻塞的,而 stderr 则是阻塞的。

$ node script.js 2> error.log | tee info.log

In daily use, the blocking/non-blocking dichotomy is not something you should worry about unless you log huge amounts of data.

在日常使用中,您不需要太担心阻塞/非阻塞的差别,除非您需要记录大量数据。

console.log([data], [...])#

Prints to stdout with newline. This function can take multiple arguments in a printf()-like way. Example:

向 stdout 打印并新起一行。这个函数可以像 printf() 那样接受多个参数,例如:

console.log('count: %d', count);

If formatting elements are not found in the first string then util.inspect is used on each argument. See util.format() for more information.

如果在第一个字符串中没有找到格式化元素,那么 util.inspect 将被应用到各个参数。详见 util.format()

console.info([data], [...])#

Same as console.log.

console.log

console.error([data], [...])#

Same as console.log but prints to stderr.

console.log,但输出到 stderr。

console.warn([data], [...])#

Same as console.error.

console.error

console.dir(obj)#

Uses util.inspect on obj and prints resulting string to stdout. This function bypasses any custom inspect() function on obj.

obj 使用 util.inspect 并将结果字符串输出到 stdout。这个函数会忽略 obj 上的任何自定义 inspect()

console.time(label)#

Mark a time.

标记一个时间点。

console.timeEnd(label)#

Finish timer, record output. Example:

结束计时器,记录输出。例如:

console.time('100-elements');
for (var i = 0; i < 100; i++) {
  ;
}
console.timeEnd('100-elements');

console.trace(label)#

Print a stack trace to stderr of the current position.

打印当前位置的栈跟踪到 stderr。

console.assert(expression, [message])#

Same as assert.ok() where if the expression evaluates as false throw an AssertionError with message.

assert.ok() 相同,如果 expression 执行结果为 false 则抛出一个带上 message 的 AssertionError。

定时器#

稳定度: 5 - 已锁定

All of the timer functions are globals. You do not need to require() this module in order to use them.

所有的定时器函数都是全局变量。使用这些函数时无需 require() 此模块。

setTimeout(callback, delay, [arg], [...])#

To schedule execution of a one-time callback after delay milliseconds. Returns a timeoutId for possible use with clearTimeout(). Optionally you can also pass arguments to the callback.

调度 delay 毫秒后的一次 callback 执行。返回一个可能被 clearTimeout() 用到的 timeoutId。可选地,您还能给回调传入参数。

It is important to note that your callback will probably not be called in exactly delay milliseconds - Node.js makes no guarantees about the exact timing of when the callback will fire, nor of the ordering things will fire in. The callback will be called as close as possible to the time specified.

请务必注意,您的回调有可能不会在准确的 delay 毫秒后被调用。Node.js 不保证回调被触发的精确时间和顺序。回调会在尽可能接近所指定时间上被调用。

clearTimeout(timeoutId)#

Prevents a timeout from triggering.

阻止一个 timeout 被触发。

setInterval(callback, delay, [arg], [...])#

To schedule the repeated execution of callback every delay milliseconds. Returns a intervalId for possible use with clearInterval(). Optionally you can also pass arguments to the callback.

调度每隔 delay 毫秒执行一次的 callback。返回一个可能被 clearInterval() 用到的 intervalId。可选地,您还能给回调传入参数。

clearInterval(intervalId)#

Stops a interval from triggering.

停止一个 interval 的触发。

unref()#

The opaque value returned by setTimeout and setInterval also has the method timer.unref() which will allow you to create a timer that is active but if it is the only item left in the event loop won't keep the program running. If the timer is already unrefd calling unref again will have no effect.

setTimeoutsetInterval 所返回的值同时具有 timer.unref() 方法,允许您创建一个活动的、但当它是事件循环中仅剩的项目时不会保持程序运行的定时器。如果定时器已被 unref,再次调用 unref 不会产生其它影响。

In the case of setTimeout, when you unref you create a separate timer that will wake up the event loop. Creating too many of these may adversely affect event loop performance -- use wisely.

在 setTimeout 的情景中,当您 unref 时会创建另一个独立的定时器来唤醒事件循环。创建太多这类定时器可能会影响事件循环的性能,请慎用。

ref()#

If you had previously unref()d a timer you can call ref() to explicitly request the timer hold the program open. If the timer is already refd calling ref again will have no effect.

如果您之前 unref() 了一个定时器,您可以调用 ref() 来明确要求定时器让程序保持运行。如果定时器已被 ref 那么再次调用 ref 不会产生其它影响。

setImmediate(callback, [arg], [...])#

To schedule the "immediate" execution of callback after I/O events callbacks and before setTimeout and setInterval . Returns an immediateId for possible use with clearImmediate(). Optionally you can also pass arguments to the callback.

调度在所有 I/O 事件回调之后、setTimeoutsetInterval 之前“立即”执行 callback。返回一个可能被 clearImmediate() 用到的 immediateId。可选地,您还能给回调传入参数。

Callbacks for immediates are queued in the order in which they were created. The entire callback queue is processed every event loop iteration. If you queue an immediate from inside an executing callback, that immediate won't fire until the next event loop iteration.

immediate 的回调以它们创建的顺序被加入队列。整个回调队列会在每个事件循环迭代中被处理。如果您在一个正被执行的回调中添加 immediate,那么这个 immediate 在下一个事件循环迭代之前都不会被触发。

clearImmediate(immediateId)#

Stops an immediate from triggering.

停止一个 immediate 的触发。

Modules#

稳定度: 5 - 已锁定

Node has a simple module loading system. In Node, files and modules are in one-to-one correspondence. As an example, foo.js loads the module circle.js in the same directory.

Node有一个简易的模块加载系统。在node中,文件和模块是一一对应的。下面示例是foo.js加载同一目录下的circle.js

The contents of foo.js:

foo.js的内容:

var circle = require('./circle.js');
console.log( 'The area of a circle of radius 4 is '
           + circle.area(4));

The contents of circle.js:

circle.js的内容:

var PI = Math.PI;
exports.area = function (r) {
    return PI * r * r;
};
exports.circumference = function (r) {
    return 2 * PI * r;
};

The module circle.js has exported the functions area() and circumference(). To export an object, add to the special exports object.

circle.js模块输出了area()circumference()两个函数。要输出某个对象,把它加到exports这个特殊对象下即可。

Note that exports is a reference to module.exports making it suitable for augmentation only. If you are exporting a single item such as a constructor you will want to use module.exports directly instead.

注意,exportsmodule.exports 的一个引用,因此它只适合用来添加属性。如果你要导出的是单个项目(例如构造函数),则应直接使用 module.exports

// 正确输出构造函数
module.exports = MyConstructor;

Variables local to the module will be private. In this example the variable PI is private to circle.js.

模块内的本地变量是私有的。在这里例子中,PI这个变量就是circle.js私有的。

The module system is implemented in the require("module") module.

模块系统的实现在require("module")中。

循环#

When there are circular require() calls, a module might not be done being executed when it is returned.

当存在循环的require()调用时,一个模块可能在返回时并不会被执行。

Consider this situation:

考虑这样一种情形:

a.js:

a.js:

console.log('a starting');
exports.done = false;
var b = require('./b.js');
console.log('in a, b.done = %j', b.done);
exports.done = true;
console.log('a done');

b.js:

b.js:

console.log('b starting');
exports.done = false;
var a = require('./a.js');
console.log('in b, a.done = %j', a.done);
exports.done = true;
console.log('b done');

main.js:

main.js:

console.log('main starting');
var a = require('./a.js');
var b = require('./b.js');
console.log('in main, a.done=%j, b.done=%j', a.done, b.done);

When main.js loads a.js, then a.js in turn loads b.js. At that point, b.js tries to load a.js. In order to prevent an infinite loop an unfinished copy of the a.js exports object is returned to the b.js module. b.js then finishes loading, and its exports object is provided to the a.js module.

首先 main.js 加载 a.js,接着 a.js 又去加载 b.js。这时,b.js 会尝试去加载 a.js。为了防止无限循环,一个未完成的 a.jsexports 对象副本会被返回给 b.js 模块。随后 b.js 完成加载,并将它的 exports 对象提供给 a.js 模块。

By the time main.js has loaded both modules, they're both finished. The output of this program would thus be:

这样main.js就把这两个模块都加载完成了。这段程序的输出如下:

$ node main.js
main starting
a starting
b starting
in b, a.done = false
b done
in a, b.done = true
a done
in main, a.done=true, b.done=true

If you have cyclic module dependencies in your program, make sure to plan accordingly.

如果你的程序中存在循环模块依赖,请确保做好相应的规划。

核心模块#

Node has several modules compiled into the binary. These modules are described in greater detail elsewhere in this documentation.

Node中有一些模块是编译成二进制的。这些模块在本文档的其他地方有更详细的描述。

The core modules are defined in node's source in the lib/ folder.

核心模块定义在node源代码的lib/目录下。

Core modules are always preferentially loaded if their identifier is passed to require(). For instance, require('http') will always return the built in HTTP module, even if there is a file by that name.

require()总是会优先加载核心模块。例如,require('http')总是返回编译好的HTTP模块,而不管是否有这个名字的文件。

文件模块#

If the exact filename is not found, then node will attempt to load the required filename with the added extension of .js, .json, and then .node.

如果按文件名没有查找到,那么node会添加 .js.json后缀名,再尝试加载,如果还是没有找到,最后会加上.node的后缀名再次尝试加载。

.js files are interpreted as JavaScript text files, and .json files are parsed as JSON text files. .node files are interpreted as compiled addon modules loaded with dlopen.

.js 会被解析为Javascript纯文本文件,.json 会被解析为JSON格式的纯文本文件. .node 则会被解析为编译后的插件模块,由dlopen进行加载。

A module prefixed with '/' is an absolute path to the file. For example, require('/home/marco/foo.js') will load the file at /home/marco/foo.js.

模块以'/'为前缀,则表示绝对路径。例如,require('/home/marco/foo.js') ,加载的是/home/marco/foo.js这个文件。

A module prefixed with './' is relative to the file calling require(). That is, circle.js must be in the same directory as foo.js for require('./circle') to find it.

模块以'./'为前缀,则路径是相对于调用require()的文件。 也就是说,circle.js必须和foo.js在同一目录下,require('./circle')才能找到。

Without a leading '/' or './' to indicate a file, the module is either a "core module" or is loaded from a node_modules folder.

当没有以'/'或者'./'来指向一个文件时,这个模块要么是"核心模块",要么就是从node_modules文件夹加载的。

If the given path does not exist, require() will throw an Error with its code property set to 'MODULE_NOT_FOUND'.

如果指定的路径不存在,require()会抛出一个code属性为'MODULE_NOT_FOUND'的错误。

node_modules文件夹中加载#

If the module identifier passed to require() is not a native module, and does not begin with '/', '../', or './', then node starts at the parent directory of the current module, and adds /node_modules, and attempts to load the module from that location.

如果require()中的模块名不是一个本地模块,也没有以'/', '../', 或是 './'开头,那么node会从当前模块的父目录开始,尝试在它的/node_modules文件夹里加载相应模块。

If it is not found there, then it moves to the parent directory, and so on, until the root of the tree is reached.

如果没有找到,那么就再向上移动到父目录,直到到达顶层目录位置。

For example, if the file at '/home/ry/projects/foo.js' called require('bar.js'), then node would look in the following locations, in this order:

例如,如果位于'/home/ry/projects/foo.js'的文件调用了require('bar.js'),那么node查找的位置依次为:

  • /home/ry/projects/node_modules/bar.js
  • /home/ry/node_modules/bar.js
  • /home/node_modules/bar.js
  • /node_modules/bar.js

  • /home/ry/projects/node_modules/bar.js

  • /home/ry/node_modules/bar.js
  • /home/node_modules/bar.js
  • /node_modules/bar.js

This allows programs to localize their dependencies, so that they do not clash.

这使得程序可以将其依赖本地化,避免相互冲突。

Folders as Modules#

It is convenient to organize programs and libraries into self-contained directories, and then provide a single entry point to that library. There are three ways in which a folder may be passed to require() as an argument.

可以把程序和库放到一个单独的文件夹里,并提供单一入口来指向它。有三种方法,使一个文件夹可以作为require()的参数来加载。

The first is to create a package.json file in the root of the folder, which specifies a main module. An example package.json file might look like this:

首先是在文件夹的根目录创建一个叫做package.json的文件,它需要指定一个main模块。下面是一个package.json文件的示例。

{ "name" : "some-library",
  "main" : "./lib/some-library.js" }

If this was in a folder at ./some-library, then require('./some-library') would attempt to load ./some-library/lib/some-library.js.

示例中这个文件,如果是放在./some-library目录下面,那么require('./some-library')就将会去加载./some-library/lib/some-library.js

This is the extent of Node's awareness of package.json files.

Node 对 package.json 文件的理解仅限于此。

If there is no package.json file present in the directory, then node will attempt to load an index.js or index.node file out of that directory. For example, if there was no package.json file in the above example, then require('./some-library') would attempt to load:

如果目录里没有package.json这个文件,那么node就会尝试去加载这个路径下的index.js或者index.node。例如,若上面例子中,没有package.json,那么require('./some-library')就将尝试加载下面的文件:

  • ./some-library/index.js
  • ./some-library/index.node

  • ./some-library/index.js

  • ./some-library/index.node

Caching#

Modules are cached after the first time they are loaded. This means (among other things) that every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.

模块在第一次加载后会被缓存。这意味着(除了别的作用以外)每次调用 require('foo') 都会返回完全相同的对象,前提是解析到同一个文件。

Multiple calls to require('foo') may not cause the module code to be executed multiple times. This is an important feature. With it, "partially done" objects can be returned, thus allowing transitive dependencies to be loaded even when they would cause cycles.

多次调用 require('foo') 不一定会使模块代码被执行多次。这是一个重要的特性。借助它可以返回“部分完成”的对象,从而允许传递性依赖被加载,即便它们会造成循环依赖。

If you want to have a module execute code multiple times, then export a function, and call that function.

如果你希望一个模块多次执行,那么就输出一个函数,然后调用这个函数。

Module Caching Caveats#

Modules are cached based on their resolved filename. Since modules may resolve to a different filename based on the location of the calling module (loading from node_modules folders), it is not a guarantee that require('foo') will always return the exact same object, if it would resolve to different files.

模块的缓存是依赖于解析后的文件名。由于随着调用的位置不同,可能解析到不同的文件(比如需从node_modules文件夹加载的情况),所以,如果解析到其他文件时,就不能保证require('foo')总是会返回确切的同一对象。

The module Object#

  • {Object}

  • {Object}

In each module, the module free variable is a reference to the object representing the current module. In particular module.exports is accessible via the exports module-global. module isn't actually a global but rather local to each module.

在每一个模块中,自由变量 module 是一个指向代表当前模块的对象的引用。特别地,module.exports 可以通过模块内的全局变量 exports 访问。module 实际上并非全局的,而是各个模块本地的。

module.exports#

  • Object

  • Object

The module.exports object is created by the Module system. Sometimes this is not acceptable, many want their module to be an instance of some class. To do this assign the desired export object to module.exports. For example suppose we were making a module called a.js

module.exports 对象由模块系统创建。有时这并不符合需要,许多人希望自己的模块是某个类的实例。为此,可以把期望导出的对象赋值给 module.exports。例如,假设我们在写一个叫 a.js 的模块

// Do some work, and after some time emit
// the 'ready' event from the module itself.
setTimeout(function() {
  module.exports.emit('ready');
}, 1000);

Then in another file we could do

那么,在另一个文件中我们可以这样写

var a = require('./a');
a.on('ready', function() {
  console.log('module a is ready');
});

Note that assignment to module.exports must be done immediately. It cannot be done in any callbacks. This does not work:

注意,对 module.exports 的赋值必须立刻完成,不能在任何回调中进行。以下代码是无效的:

x.js:

x.js:

setTimeout(function() {
  module.exports = { a: "hello" };
}, 0);

y.js:

y.js:

var x = require('./x');
console.log(x.a);

module.require(id)#

  • id String
  • Return: Object module.exports from the resolved module

  • id String

  • Return: Object 已解析模块的 module.exports

The module.require method provides a way to load a module as if require() was called from the original module.

module.require 方法提供了一种加载模块的方式,就如同从原始模块中调用 require() 一样。

Note that in order to do this, you must get a reference to the module object. Since require() returns the module.exports, and the module is typically only available within a specific module's code, it must be explicitly exported in order to be used.

注意,为了做到这一点,你必须取得对 module 对象的引用。由于 require() 返回的是 module.exports,而 module 通常只在特定模块的代码内有效,因此要使用它就必须明确地将其导出。

module.id#

  • String

  • String

The identifier for the module. Typically this is the fully resolved filename.

用于区别模块的标识符。通常是完全解析后的文件名。

module.filename#

  • String

  • String

The fully resolved filename to the module.

模块完全解析后的文件名。

module.loaded#

  • Boolean

  • Boolean

Whether or not the module is done loading, or is in the process of loading.

表示该模块是已经加载完毕,还是正在加载的过程中。

module.parent#

  • Module Object

  • Module Object

The module that required this one.

引入这个模块的模块。

module.children#

  • Array

  • Array

The module objects required by this one.

这个模块引入的所有模块对象。

总体来说...#

To get the exact filename that will be loaded when require() is called, use the require.resolve() function.

为了获取调用 require 加载的确切的文件名,使用 require.resolve() 函数。

Putting together all of the above, here is the high-level algorithm in pseudocode of what require.resolve does:

综上所述,下面用伪代码的高级算法形式表达了 require.resolve 是如何工作的:

NODE_MODULES_PATHS(START)
1. let PARTS = path split(START)
2. let ROOT = index of first instance of "node_modules" in PARTS, or 0
3. let I = count of PARTS - 1
4. let DIRS = []
5. while I > ROOT,
   a. if PARTS[I] = "node_modules" CONTINUE
   b. DIR = path join(PARTS[0 .. I] + "node_modules")
   c. DIRS = DIRS + DIR
   d. let I = I - 1
6. return DIRS

从全局文件夹加载#

If the NODE_PATH environment variable is set to a colon-delimited list of absolute paths, then node will search those paths for modules if they are not found elsewhere. (Note: On Windows, NODE_PATH is delimited by semicolons instead of colons.)

如果 NODE_PATH 环境变量设置为一个以冒号分隔的绝对路径列表,当在其他位置找不到模块时,node 将会从这些路径中搜索模块。(注意:在 Windows 操作系统上,NODE_PATH 是以分号而非冒号分隔的。)

Additionally, node will search in the following locations:

此外,node 将会搜索以下地址:

  • 1: $HOME/.node_modules
  • 2: $HOME/.node_libraries
  • 3: $PREFIX/lib/node

  • 1: $HOME/.node_modules

  • 2: $HOME/.node_libraries
  • 3: $PREFIX/lib/node

Where $HOME is the user's home directory, and $PREFIX is node's configured node_prefix.

$HOME 是用户的主目录,$PREFIX 是 node 里配置的 node_prefix

These are mostly for historic reasons. You are highly encouraged to place your dependencies locally in node_modules folders. They will be loaded faster, and more reliably.

这些路径的存在大多是出于历史原因。强烈建议您将依赖放到本地的 node_modules 文件夹里,这样加载得更快也更可靠。

访问主模块#

When a file is run directly from Node, require.main is set to its module. That means that you can determine whether a file has been run directly by testing

当 Node 直接运行一个文件时,require.main 就被设置为它的 module 。 也就是说你可以判断一个文件是否是直接被运行的

require.main === module

For a file foo.js, this will be true if run via node foo.js, but false if run by require('./foo').

对于一个 foo.js 文件,如果通过 node foo.js 运行,上式为 true;如果通过 require('./foo') 引入,则为 false

Because module provides a filename property (normally equivalent to __filename), the entry point of the current application can be obtained by checking require.main.filename.

因为 module 提供了一个 filename 属性(通常等于 __filename), 所以当前程序的入口点可以通过 require.main.filename 来获取。

附录: 包管理技巧#

The semantics of Node's require() function were designed to be general enough to support a number of sane directory structures. Package manager programs such as dpkg, rpm, and npm will hopefully find it possible to build native packages from Node modules without modification.

Node 的 require() 函数的语义被设计得足够通用,以支持各种常规的目录结构。dpkg、rpm 和 npm 这样的包管理程序,有望不加修改地从 Node 模块构建原生安装包。

Below we give a suggested directory structure that could work:

接下来我们将给你一个可行的目录结构建议:

Let's say that we wanted to have the folder at /usr/lib/node/<some-package>/<some-version> hold the contents of a specific version of a package.

假设我们希望将一个包的指定版本放在 /usr/lib/node/<some-package>/<some-version> 目录中。

Packages can depend on one another. In order to install package foo, you may have to install a specific version of package bar. The bar package may itself have dependencies, and in some cases, these dependencies may even collide or form cycles.

包可以依赖于其他包。为了安装包 foo,可能需要安装包 bar 的一个指定版本。 包 bar 也可能有依赖关系,在某些情况下依赖关系可能发生冲突或者形成循环。

Since Node looks up the realpath of any modules it loads (that is, resolves symlinks), and then looks for their dependencies in the node_modules folders as described above, this situation is very simple to resolve with the following architecture:

因为 Node 会查找它所加载模块的真实路径(也就是说会解析符号链接),然后按照上文描述的方式在 node_modules 目录中寻找依赖,所以这种情形用以下目录结构可以非常简单地解决:

  • /usr/lib/node/foo/1.2.3/ - Contents of the foo package, version 1.2.3.
  • /usr/lib/node/bar/4.3.2/ - Contents of the bar package that foo depends on.
  • /usr/lib/node/foo/1.2.3/node_modules/bar - Symbolic link to /usr/lib/node/bar/4.3.2/.
  • /usr/lib/node/bar/4.3.2/node_modules/* - Symbolic links to the packages that bar depends on.

  • /usr/lib/node/foo/1.2.3/ - foo 包 1.2.3 版本的内容
  • /usr/lib/node/bar/4.3.2/ - foo 包所依赖的 bar 包的内容
  • /usr/lib/node/foo/1.2.3/node_modules/bar - 指向 /usr/lib/node/bar/4.3.2/ 的符号链接
  • /usr/lib/node/bar/4.3.2/node_modules/* - 指向 bar 包所依赖的包的符号链接

Thus, even if a cycle is encountered, or if there are dependency conflicts, every module will be able to get a version of its dependency that it can use.

因此,即便遇到循环依赖或依赖冲突,每个模块还是能获得一个它可用的依赖版本。

When the code in the foo package does require('bar'), it will get the version that is symlinked into /usr/lib/node/foo/1.2.3/node_modules/bar. Then, when the code in the bar package calls require('quux'), it'll get the version that is symlinked into /usr/lib/node/bar/4.3.2/node_modules/quux.

当 foo 包中的代码调用 require('bar') 时,将获得符号链接 /usr/lib/node/foo/1.2.3/node_modules/bar 指向的版本。然后,当 bar 包中的代码调用 require('quux') 时,将获得符号链接 /usr/lib/node/bar/4.3.2/node_modules/quux 指向的版本。

Furthermore, to make the module lookup process even more optimal, rather than putting packages directly in /usr/lib/node, we could put them in /usr/lib/node_modules/<name>/<version>. Then node will not bother looking for missing dependencies in /usr/node_modules or /node_modules.

此外,为了进一步优化模块搜索过程,可以不把包直接放在 /usr/lib/node 目录中,而是放在 /usr/lib/node_modules/<name>/<version> 目录中。这样 node 就不会费力去 /usr/node_modules 或 /node_modules 目录中寻找找不到的依赖了。

In order to make modules available to the node REPL, it might be useful to also add the /usr/lib/node_modules folder to the $NODE_PATH environment variable. Since the module lookups using node_modules folders are all relative, and based on the real path of the files making the calls to require(), the packages themselves can be anywhere.

为了使模块在 node 的 REPL 中可用,你可能需要将 /usr/lib/node_modules 目录加入到 $NODE_PATH 环境变量中。 由于在 node_modules 目录中搜索模块使用的是相对路径,基于调用 require() 的文件所在真实路径,因此包本身可以放在任何位置。

Addons插件#

Addons are dynamically linked shared objects. They can provide glue to C and C++ libraries. The API (at the moment) is rather complex, involving knowledge of several libraries:

Addons 插件是动态链接的共享对象,它们为 C 和 C++ 类库提供胶水层。它的 API(目前来说)相当复杂,涉及以下几个类库的知识:

  • V8 JavaScript, a C++ library. Used for interfacing with JavaScript: creating objects, calling functions, etc. Documented mostly in the v8.h header file (deps/v8/include/v8.h in the Node source tree), which is also available online.

  • V8 JavaScript,一个 C++ 类库,用于和 JavaScript 交互:创建对象、调用函数等。文档大部分在 v8.h 头文件中(Node 源码目录下的 deps/v8/include/v8.h),也有在线版本可用。(译者:想要学习 C++ 的 addons 插件编写,必须先了解 v8 的接口)

  • libuv, C event loop library. Anytime one needs to wait for a file descriptor to become readable, wait for a timer, or wait for a signal to be received one will need to interface with libuv. That is, if you perform any I/O, libuv will need to be used.

  • libuv,C 语言编写的事件循环类库。任何时候需要等待一个文件描述符变为可读、等待一个定时器或者等待接收一个信号,都需要使用 libuv 的接口。也就是说,只要执行任何 I/O 操作,就会用到 libuv。

  • Internal Node libraries. Most importantly is the node::ObjectWrap class which you will likely want to derive from.

  • 内部的 Node 类库。其中最重要的是 node::ObjectWrap 类,你很可能需要从它派生。

  • Others. Look in deps/ for what else is available.

  • 其他。请查看 deps/ 目录了解还有哪些类库可用。

Node statically compiles all its dependencies into the executable. When compiling your module, you don't need to worry about linking to any of these libraries.

Node 把它所有的依赖都静态编译进了可执行文件,所以编译你自己的模块时,完全不必担心与这些类库的链接问题。(译者:换而言之,你在编译自己的addons插件时,只管在头部 #include <uv.h>,不必在binding.gyp中声明)

All of the following examples are available for download and may be used as a starting-point for your own Addon.

下面所有的例子都可以下载,或许能成为你编写自己的 addon 插件的起点。

Hello world(世界你好)#

To get started let's make a small Addon which is the C++ equivalent of the following JavaScript code:

作为开始,让我们编写一个小的 addon 插件,它的 C++ 代码等价于下面的 JavaScript 代码:

module.exports.hello = function() { return 'world'; };

First we create a file hello.cc:

首先我们创建一个 hello.cc文件:

NODE_MODULE(hello, init)//译者:将addon插件名hello和上述init函数关联输出

Note that all Node addons must export an initialization function:

注意所有 Node 的 addons 插件都必须导出一个初始化函数:

void Initialize (Handle<Object> exports);
NODE_MODULE(module_name, Initialize)

There is no semi-colon after NODE_MODULE as it's not a function (see node.h).

NODE_MODULE 之后没有分号,因为它不是一个函数(请参阅 node.h)。

The module_name needs to match the filename of the final binary (minus the .node suffix).

module_name(模块名)需要和最终生成的二进制文件名(去掉 .node 后缀)相同。

The source code needs to be built into hello.node, the binary Addon. To do this we create a file called binding.gyp which describes the configuration to build your module in a JSON-like format. This file gets compiled by node-gyp.

源代码需要编译成 hello.node,即二进制的 addon 插件。为此我们创建一个名为 binding.gyp 的文件,它以类似 JSON 的格式描述构建这个模块的配置。该文件将由 node-gyp 编译。

{
  "targets": [
    {
      "target_name": "hello", //译者:addon插件名,注意这里的名字必需和上面NODE_MODULE中的一致
      "sources": [ "hello.cc" ]  //译者:这是需要编译的源文件
    }
  ]
}

The next step is to generate the appropriate project build files for the current platform. Use node-gyp configure for that.

下一步是根据当前的操作系统平台,利用node-gyp configure命令,生成合适的项目文件。

Now you will have either a Makefile (on Unix platforms) or a vcxproj file (on Windows) in the build/ directory. Next invoke the node-gyp build command.

现在在 build/ 目录中你会得到一个 Makefile(Unix 平台)或者一个 vcxproj 文件(Windows 平台)。接着执行 node-gyp build 命令进行编译。(译者:当然你可以执行 node-gyp rebuild一步搞定)

Now you have your compiled .node bindings file! The compiled bindings end up in build/Release/.

现在你已经有了编译好的 .node 绑定文件,它位于 build/Release/ 目录中。

You can now use the binary addon in a Node project hello.js by pointing require to the recently built hello.node module:

现在,在 Node 项目的 hello.js 中,通过 require 刚刚构建的 hello.node 模块,你就可以使用这个二进制 addon 插件了:

var addon = require('./build/Release/hello');
console.log(addon.hello()); // 'world'

Please see patterns below for further information or https://github.com/arturadib/node-qt for an example in production.

请阅读下面的内容获得更多详情,或者访问 https://github.com/arturadib/node-qt 获取一个生产环境中的例子。

Addon patterns(插件方式)#

Below are some addon patterns to help you get started. Consult the online v8 reference for help with the various v8 calls, and v8's Embedder's Guide for an explanation of several concepts used such as handles, scopes, function templates, etc.

下面是一些帮助你上手编写 addon 插件的模式。调用各种 v8 接口时可以参考在线的 v8 手册,handles、scopes、function templates 等概念的解释则见 v8 的嵌入式开发向导。

In order to use these examples you need to compile them using node-gyp. Create the following binding.gyp file:

为了运行这些例子,你需要用 node-gyp 编译它们。创建如下的 binding.gyp 文件:

{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "addon.cc" ]
    }
  ]
}

In cases where there is more than one .cc file, simply add the file name to the sources array, e.g.:

如果有多个 .cc 文件,只需把文件名加到 sources 数组里即可,例如:

"sources": ["addon.cc", "myexample.cc"]

Now that you have your binding.gyp ready, you can configure and build the addon:

现在你有了 binding.gyp 文件,就可以执行 configure 和 build 命令构建你的 addon 插件了:

$ node-gyp configure build

Function arguments(函数参数)#

The following pattern illustrates how to read arguments from JavaScript function calls and return a result. This is the main and only needed source addon.cc:

下面的模式说明了如何读取 JavaScript 函数调用传来的参数并返回一个结果。这是主要且唯一需要的源文件 addon.cc

NODE_MODULE(addon, Init)

You can test it with the following JavaScript snippet:

你可以使用下面的JavaScript代码片段来测试它

var addon = require('./build/Release/addon');
console.log( 'This should be eight:', addon.add(3,5) );

Callbacks(回调)#

You can pass JavaScript functions to a C++ function and execute them from there. Here's addon.cc:

你可以把 JavaScript 函数传递给 C++ 函数并在其中执行它们。下面是 addon.cc 文件:

NODE_MODULE(addon, Init)

Note that this example uses a two-argument form of Init() that receives the full module object as the second argument. This allows the addon to completely overwrite exports with a single function instead of adding the function as a property of exports.

注意这个例子中 Init() 使用了双参数形式,将完整的 module 对象作为第二个参数接收。这允许 addon 插件用单个函数完全重写 exports,而不是把函数添加为 exports 的一个属性。

To test it run the following JavaScript snippet:

你可以使用下面的JavaScript代码片段来测试它

var addon = require('./build/Release/addon');

addon(function(msg){
  console.log(msg); // 'hello world'
});

Object factory(对象工厂)#

You can create and return new objects from within a C++ function with this addon.cc pattern, which returns an object with property msg that echoes the string passed to createObject():

利用 addon.cc 中的这种模式,你可以在 C++ 函数中创建并返回新的对象。下面返回的对象带有一个 msg 属性,它回显传给 createObject() 的字符串:

NODE_MODULE(addon, Init)

To test it in JavaScript:

在js中测试如下:

var addon = require('./build/Release/addon');

var obj1 = addon('hello');
var obj2 = addon('world');
console.log(obj1.msg+' '+obj2.msg); // 'hello world'

Function factory(函数工厂)#

This pattern illustrates how to create and return a JavaScript function that wraps a C++ function:

这个模式展示了如何创建并返回一个包装了 C++ 函数的 JavaScript 函数:

NODE_MODULE(addon, Init)

To test:

测试它:

var addon = require('./build/Release/addon');

var fn = addon();
console.log(fn()); // 'hello world'

Wrapping C++ objects(包装c++对象)#

Here we will create a wrapper for a C++ object/class MyObject that can be instantiated in JavaScript through the new operator. First prepare the main module addon.cc:

这里我们将为 C++ 对象/类 MyObject 创建一个包装,使它可以在 JavaScript 中通过 new 操作符实例化。首先准备主模块文件 addon.cc:

NODE_MODULE(addon, InitAll)

Then in myobject.h make your wrapper inherit from node::ObjectWrap:

然后在myobject.h文件中创建你的包装类,它继承自 node::ObjectWrap:

#endif

And in myobject.cc implement the various methods that you want to expose. Here we expose the method plusOne by adding it to the constructor's prototype:

然后在 myobject.cc 中实现你想要暴露的各种方法。这里我们把 plusOne 方法添加到构造函数的原型上,从而把它暴露出去:

  return scope.Close(Number::New(obj->counter_));
}

Test it with:

测试它:

var addon = require('./build/Release/addon');

var obj = new addon.MyObject(10);
console.log( obj.plusOne() ); // 11
console.log( obj.plusOne() ); // 12
console.log( obj.plusOne() ); // 13

Factory of wrapped objects(工厂包装对象)#

This is useful when you want to be able to create native objects without explicitly instantiating them with the new operator in JavaScript, e.g.

当你希望能够创建原生对象,又不想在 JavaScript 中显式地使用 new 操作符时,这种模式非常有用,例如:

var obj = addon.createObject();
// 用上面的方式代替下面的:
// var obj = new addon.Object();

Let's register our createObject method in addon.cc:

让我们在 addon.cc 文件中注册 createObject 方法:

NODE_MODULE(addon, InitAll)

In myobject.h we now introduce the static method NewInstance that takes care of instantiating the object (i.e. it does the job of new in JavaScript):

myobject.h文件中,我们现在介绍静态方法NewInstance,它能够实例化对象(举个例子,它的工作就像是 在JavaScript中的new` 操作符。)

#endif

The implementation is similar to the above in myobject.cc:

myobject.cc 中的实现与上面类似:

  return scope.Close(Number::New(obj->counter_));
}

Test it with:

测试它:

var createObject = require('./build/Release/addon');

var obj2 = createObject(20);
console.log( obj2.plusOne() ); // 21
console.log( obj2.plusOne() ); // 22
console.log( obj2.plusOne() ); // 23

Passing wrapped objects around(传递包装的对象)#

In addition to wrapping and returning C++ objects, you can pass them around by unwrapping them with Node's node::ObjectWrap::Unwrap helper function. In the following addon.cc we introduce a function add() that can take on two MyObject objects:

除了包装和返回 C++ 对象以外,你还可以用 Node 的 node::ObjectWrap::Unwrap 辅助函数把它们解包装后传来传去。下面的 addon.cc 中,我们引入了一个可以接受两个 MyObject 对象的函数 add()

NODE_MODULE(addon, InitAll)

To make things interesting we introduce a public method in myobject.h so we can probe private values after unwrapping the object:

为了让事情变得有趣,我们在 myobject.h 中引入一个公共方法,这样就能在解包装对象之后访问其私有成员的值:

#endif

The implementation of myobject.cc is similar as before:

myobject.cc 的实现和前面类似:

  return scope.Close(instance);
}

Test it with:

测试它:

var addon = require('./build/Release/addon');

var obj1 = addon.createObject(10);
var obj2 = addon.createObject(20);
var result = addon.add(obj1, obj2);

console.log(result); // 30

process#

The process object is a global object and can be accessed from anywhere. It is an instance of EventEmitter.

process对象是一个全局对象,可以在任何地方访问到它。 它是EventEmitter的一个实例。

Exit Codes#

Node will normally exit with a 0 status code when no more async operations are pending. The following status codes are used in other cases:

当没有等待中的异步操作时,Node 正常退出的状态码是 0。(笔者注:linux terminal 下使用 echo $? 查看,win cmd 下使用 echo %ERRORLEVEL% 查看)其他情况下会使用以下状态码:

  • 1 Uncaught Fatal Exception - There was an uncaught exception, and it was not handled by a domain or an uncaughtException event handler.
  • 2 - Unused (reserved by Bash for builtin misuse)
  • 3 Internal JavaScript Parse Error - The JavaScript source code internal in Node's bootstrapping process caused a parse error. This is extremely rare, and generally can only happen during development of Node itself.
  • 4 Internal JavaScript Evaluation Failure - The JavaScript source code internal in Node's bootstrapping process failed to return a function value when evaluated. This is extremely rare, and generally can only happen during development of Node itself.
  • 5 Fatal Error - There was a fatal unrecoverable error in V8. Typically a message will be printed to stderr with the prefix FATAL ERROR.
  • 6 Non-function Internal Exception Handler - There was an uncaught exception, but the internal fatal exception handler function was somehow set to a non-function, and could not be called.
  • 7 Internal Exception Handler Run-Time Failure - There was an uncaught exception, and the internal fatal exception handler function itself threw an error while attempting to handle it. This can happen, for example, if a process.on('uncaughtException') or domain.on('error') handler throws an error.
  • 8 - Unused. In previous versions of Node, exit code 8 sometimes indicated an uncaught exception.
  • 9 - Invalid Argument - Either an unknown option was specified, or an option requiring a value was provided without a value.
  • 10 Internal JavaScript Run-Time Failure - The JavaScript source code internal in Node's bootstrapping process threw an error when the bootstrapping function was called. This is extremely rare, and generally can only happen during development of Node itself.
  • 12 Invalid Debug Argument - The --debug and/or --debug-brk options were set, but an invalid port number was chosen.
  • >128 Signal Exits - If Node receives a fatal signal such as SIGKILL or SIGHUP, then its exit code will be 128 plus the value of the signal code. This is a standard Unix practice, since exit codes are defined to be 7-bit integers, and signal exits set the high-order bit, and then contain the value of the signal code.

  • 1 未捕获的致命异常(Uncaught Fatal Exception) - 有未被捕获的异常,并且没有被 domain 或 uncaughtException 事件处理函数处理。
  • 2 - 未使用(Unused)(由 Bash 保留,用于表示内建功能误用)
  • 3 内部 JavaScript 解析错误(Internal JavaScript Parse Error) - Node 启动过程内部的 JavaScript 源码导致了解析错误。极其罕见,通常只会在开发 Node 本身的过程中出现。
  • 4 内部 JavaScript 求值失败(Internal JavaScript Evaluation Failure) - Node 启动过程内部的 JavaScript 源码在求值时未能返回函数值。极其罕见,通常只会在开发 Node 本身的过程中出现。
  • 5 致命错误(Fatal Error) - V8 中发生了致命且不可恢复的错误。通常会向 stderr 打印带有 FATAL ERROR 前缀的消息。
  • 6 非函数的内部异常处理器(Non-function Internal Exception Handler) - 有未被捕获的异常,但内部的致命异常处理函数被设置成了非函数,因而无法被调用。
  • 7 内部异常处理器运行时失败(Internal Exception Handler Run-Time Failure) - 有未被捕获的异常,而内部的致命异常处理函数在处理它时自身抛出了错误。比如 process.on('uncaughtException') 或 domain.on('error') 的处理函数抛出错误时就会发生。
  • 8 - 未使用(Unused)。在之前版本的 Node 中,退出码 8 有时表示未捕获的异常。
  • 9 - 无效的参数(Invalid Argument) - 指定了未知的选项,或者需要值的选项没有提供值。
  • 10 内部 JavaScript 运行时失败(Internal JavaScript Run-Time Failure) - Node 启动过程内部的 JavaScript 源码在调用启动函数时抛出了错误。极其罕见,通常只会在开发 Node 本身的过程中出现。
  • 12 无效的调试参数(Invalid Debug Argument) - 设置了 --debug 和/或 --debug-brk 选项,但选择了无效的端口号。
  • >128 信号退出(Signal Exits) - 如果 Node 收到诸如 SIGKILL 或 SIGHUP 之类的致命信号,它的退出码将是 128 加上信号的编号。这是标准的 Unix 做法:退出码被定义为 7 位整数,信号退出会设置高位,并包含信号的编号。

事件: 'exit'#

Emitted when the process is about to exit. This is a good hook to perform constant time checks of the module's state (like for unit tests). The main event loop will no longer be run after the 'exit' callback finishes, so timers may not be scheduled.

当进程将要退出时触发。这是一个在固定时间检查模块状态(如单元测试)的好时机。需要注意的是 'exit' 的回调结束后,主事件循环将不再运行,所以计时器也会失效。

Example of listening for exit:

监听 exit 事件的例子:

process.on('exit', function() {
  // 设置一个延迟执行
  setTimeout(function() {
    console.log('主事件循环已停止,所以不会执行');
  }, 0);
  console.log('退出前执行');
});

事件: 'uncaughtException'(未捕获错误)#

Emitted when an exception bubbles all the way back to the event loop. If a listener is added for this exception, the default action (which is to print a stack trace and exit) will not occur.

当一个异常冒泡回归到事件循环中就会触发这个事件,如果建立了一个监听器来监听这个异常,默认的行为(打印堆栈跟踪信息并退出)就不会发生。

Example of listening for uncaughtException:

监听 uncaughtException 示例:

process.on('uncaughtException', function(err) {
  console.log('捕获到异常: ' + err);
});
// 故意制造一个异常,而且不catch捕获它.
nonexistentFunc();
console.log('This will not run.');

Note that uncaughtException is a very crude mechanism for exception handling.

注意,uncaughtException 是一种非常粗糙的异常处理机制。

Don't use it, use domains instead. If you do use it, restart your application after every unhandled exception!

不要使用它,请改用 domains。如果你一定要用,请在每次出现未处理的异常之后重启你的应用!

Do not use it as the node.js equivalent of On Error Resume Next. An unhandled exception means your application - and by extension node.js itself - is in an undefined state. Blindly resuming means anything could happen.

不要把它当作 node.js 版的 On Error Resume Next 来用。未处理的异常意味着你的应用(乃至 Node.js 本身)处于未定义状态,盲目地恢复执行意味着任何事情都可能发生。

Think of resuming as pulling the power cord when you are upgrading your system. Nine out of ten times nothing happens - but the 10th time, your system is bust.

把恢复执行想象成在升级系统时拔掉电源线:十次里有九次什么都不会发生,但第十次,你的系统就会崩溃。

You have been warned.

你已经被警告。

Signal Events#

Emitted when the processes receives a signal. See sigaction(2) for a list of standard POSIX signal names such as SIGINT, SIGUSR1, etc.

当进程接收到信号时触发。信号列表详见 POSIX 标准的 sigaction(2)如 SIGINT、SIGUSR1 等。

Example of listening for SIGINT:

监听 SIGINT 信号的示例:

// 从 stdin 开始读取,使进程不会退出
process.stdin.resume();

process.on('SIGINT', function() {
  console.log('收到 SIGINT 信号。  退出请使用 Ctrl + D ');
});

An easy way to send the SIGINT signal is with Control-C in most terminal programs.

在大多数终端下,一个发送 SIGINT 信号的简单方法是按下 ctrl + c

process.stdout#

A Writable Stream to stdout.

一个指向标准输出流(stdout)可写的流(Writable Stream)

Example: the definition of console.log

举例: console.log 的实现

console.log = function(d) {
  process.stdout.write(d + '\n');
}; 

process.stderr and process.stdout are unlike other streams in Node in that writes to them are usually blocking. They are blocking in the case that they refer to regular files or TTY file descriptors. In the case they refer to pipes, they are non-blocking like other streams.

process.stderr 和 process.stdout 不像 Node 中其他的流(Streams) 那样,他们通常是阻塞式的写入。当其引用指向 普通文件 或者 TTY文件描述符 时他们就是阻塞的(注:TTY 可以理解为终端的一种,可联想 PuTTY,详见百科)。当他们引用指向管道(pipes)时,他们就同其他的流(Streams)一样是非阻塞的。

To check if Node is being run in a TTY context, read the isTTY property on process.stderr, process.stdout, or process.stdin:

要检查 Node 是否运行在 TTY 上下文中(注:linux 中没有运行在 tty 下的进程是 守护进程 ),可以检查 process.stderr、process.stdout 或 process.stdin 的 isTTY 属性:

$ node -p "Boolean(process.stdout.isTTY)"
true
$ node -p "Boolean(process.stdout.isTTY)" | cat
false 

See the tty docs for more information.

更多信息,请查看 tty 文档

process.stderr#

A writable stream to stderr.

一个指向标准错误流(stderr)的 可写的流(Writable Stream)。

process.stderr and process.stdout are unlike other streams in Node in that writes to them are usually blocking. They are blocking in the case that they refer to regular files or TTY file descriptors. In the case they refer to pipes, they are non-blocking like other streams.

process.stderr 和 process.stdout 不像 Node 中其他的流(Streams) 那样,他们通常是阻塞式的写入。当其引用指向 普通文件 或者 TTY文件描述符 时他们就是阻塞的(注:TTY 可以理解为终端的一种,可联想 PuTTY,详见百科)。当他们引用指向管道(pipes)时,他们就同其他的流(Streams)一样是非阻塞的。

process.stdin#

A Readable Stream for stdin. The stdin stream is paused by default, so one must call process.stdin.resume() to read from it.

一个指向 标准输入流(stdin) 的可读流(Readable Stream)。标准输入流默认是暂停 (pause) 的,所以必须要调用 process.stdin.resume() 来恢复 (resume) 接收。

Example of opening standard input and listening for both events:

打开标准输入流,并监听两个事件的示例:

process.stdin.resume();
process.stdin.setEncoding('utf8');

process.stdin.on('end', function() {
  process.stdout.write('end');
});


// gets 函数的简单实现
function gets(cb){
  process.stdin.resume();
  process.stdin.setEncoding('utf8');

  process.stdin.on('data', function(chunk) {
     process.stdin.pause();
     cb(chunk);
  });
}

gets(function(result){
  console.log("["+result+"]");
});

process.argv#

An array containing the command line arguments. The first element will be 'node', the second element will be the name of the JavaScript file. The next elements will be any additional command line arguments.

一个包含命令行参数的数组。第一个元素是 'node',第二个元素是 JavaScript 文件的名称,接下来的元素依次是其余的命令行参数。

// 打印 process.argv
process.argv.forEach(function(val, index, array) {
  console.log(index + ': ' + val);
});

This will generate:

输出将会是:

$ node process-2.js one two=three four
0: node
1: /Users/mjr/work/node/process-2.js
2: one
3: two=three
4: four 

process.execPath#

This is the absolute pathname of the executable that started the process.

开启当前进程的这个可执行文件的绝对路径。

Example:

示例:

/usr/local/bin/node 

process.execArgv#

This is the set of node-specific command line options from the executable that started the process. These options do not show up in process.argv, and do not include the node executable, the name of the script, or any options following the script name. These options are useful in order to spawn child processes with the same execution environment as the parent.

process.argv 类似,不过是用于保存 node特殊(node-specific) 的命令行选项(参数)。这些特殊的选项不会出现在 process.argv 中,而且 process.execArgv 不会保存 process.argv 中保存的参数(如 0:node 1:文件名 2.3.4.参数 等), 所有文件名之后的参数都会被忽视。这些选项可以用于派生与与父进程相同执行环境的子进程。

Example:

示例:

$ node --harmony script.js --version 

results in process.execArgv:

process.execArgv 中的特殊选项:

['--harmony'] 

and process.argv:

process.argv 接收到的参数:

['/usr/local/bin/node', 'script.js', '--version'] 

process.abort()#

This causes node to emit an abort. This will cause node to exit and generate a core file.

调用该方法会使 node 触发中止(abort),导致 node 退出并生成一个核心转储文件。

process.chdir(directory)#

Changes the current working directory of the process or throws an exception if that fails.

改变进程当前的工作目录,若操作失败则抛出异常。

console.log('当前目录:' + process.cwd());
try {
  process.chdir('/tmp');
  console.log('新目录:' + process.cwd());
}
catch (err) {
  console.log('chdir: ' + err);
}

process.cwd()#

Returns the current working directory of the process.

返回进程当前的工作目录。

console.log('当前目录:' + process.cwd());

process.env#

An object containing the user environment. See environ(7).

一个包括用户环境的对象。详细参见 environ(7)。
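(译者注:下面的示例演示读取和设置环境变量;对 process.env 的修改只影响当前进程及其派生的子进程,其中 MY_FLAG 为假设的变量名。)

```javascript
// 读取一个环境变量(内容视运行环境而定)
console.log(process.env.PATH ? 'PATH 已设置' : 'PATH 未设置');

// 设置一个环境变量(MY_FLAG 为假设的变量名)
process.env.MY_FLAG = '1';
console.log(process.env.MY_FLAG); // '1'
```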

process.exit([code])#

Ends the process with the specified code. If omitted, exit uses the 'success' code 0.

终止当前进程并返回给定的 code。如果省略了 code,退出是会默认返回成功的状态码('success' code) 也就是 0

To exit with a 'failure' code:

退出并返回失败的状态 ('failure' code):

process.exit(1); 

The shell that executed node should see the exit code as 1.

执行上述代码,用来执行 node 的 shell 就能收到值为 1 的 exit code

process.exitCode#

A number which will be the process exit code, when the process either exits gracefully, or is exited via process.exit() without specifying a code.

当进程正常退出,或者通过未指定 code 的 process.exit() 退出时,该属性中的数字将作为进程的退出码 (exit code)。

Specifying a code to process.exit(code) will override any previous setting of process.exitCode.

如果指定了 process.exit(code) 中的退出码 (code),则会覆盖之前对 process.exitCode 的设置。
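(译者注:下面的片段演示了上述行为:)

```javascript
// 先设置 process.exitCode,进程自然退出时该值生效
process.exitCode = 0;
console.log(process.exitCode); // 0

// 如果之后调用 process.exit(1),退出码将是 1 而不是 0
// process.exit(1);
```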

process.getgid()#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Gets the group identity of the process. (See getgid(2).) This is the numerical group id, not the group name.

获取进程的群组标识(详见getgid(2))。获取到的是群组的数字ID,不是群组名称。

if (process.getgid) {
  console.log('当前 gid: ' + process.getgid());
}

process.setgid(id)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Sets the group identity of the process. (See setgid(2).) This accepts either a numerical ID or a groupname string. If a groupname is specified, this method blocks while resolving it to a numerical ID.

设置进程的群组标识(详见 setgid(2))。参数可以是数字 ID 或者群组名字符串。如果指定了群组名,这个方法会在把群组名解析为数字 ID 期间阻塞。

if (process.getgid && process.setgid) {
  console.log('当前 gid: ' + process.getgid());
  try {
    process.setgid(501);
    console.log('新 gid: ' + process.getgid());
  }
  catch (err) {
    console.log('设置 gid 失败: ' + err);
  }
}

process.getuid()#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Gets the user identity of the process. (See getuid(2).) This is the numerical userid, not the username.

获取执行进程的用户 ID(详见 getuid(2))。这是用户的数字 ID,不是用户名。

if (process.getuid) {
  console.log('当前 uid: ' + process.getuid());
}

process.setuid(id)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Sets the user identity of the process. (See setuid(2).) This accepts either a numerical ID or a username string. If a username is specified, this method blocks while resolving it to a numerical ID.

设置执行进程的用户 ID(详见 setuid(2))。参数可以是一个数字 ID 或者用户名字符串。如果指定了用户名,该方法会在把用户名解析为数字 ID 期间阻塞。

if (process.getuid && process.setuid) {
  console.log('当前 uid: ' + process.getuid());
  try {
    process.setuid(501);
    console.log('新 uid: ' + process.getuid());
  }
  catch (err) {
    console.log('设置 uid 失败: ' + err);
  }
}

process.getgroups()#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Returns an array with the supplementary group IDs. POSIX leaves it unspecified if the effective group ID is included but node.js ensures it always is.

返回一个包含补充组 ID(supplementary group ID)的数组。POSIX 标准没有规定结果是否包含有效组 ID(effective group ID),而 node.js 确保它总是包含。
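(译者注:由于该函数仅在 POSIX 平台存在,调用前应先做能力检测:)

```javascript
// getgroups 仅在 POSIX 平台可用
if (process.getgroups) {
  var groups = process.getgroups();
  console.log('补充组 ID: ' + groups.join(', '));
}
```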

process.setgroups(groups)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Sets the supplementary group IDs. This is a privileged operation, meaning you need to be root or have the CAP_SETGID capability.

设置补充组 ID。这是一个特权操作,意味着你需要拥有 root 身份或 CAP_SETGID 能力。(译者:CAP_SETGID表示设定程序允许普通用户使用setgid函数,这与文件的setgid权限位无关)

The list can contain group IDs, group names or both.

这个列表可以包含组 ID、组名,或者两者兼有。

process.initgroups(user, extra_group)#

Note: this function is only available on POSIX platforms (i.e. not Windows, Android)

注意: 该函数仅适用于遵循 POSIX 标准的系统平台如 Unix、Linux等 而 Windows、 Android 等则不适用。

Reads /etc/group and initializes the group access list, using all groups of which the user is a member. This is a privileged operation, meaning you need to be root or have the CAP_SETGID capability.

读取 /etc/group 并使用该用户所属的所有组初始化组访问列表。这是一个特权操作,意味着你需要拥有 root 身份或 CAP_SETGID 能力。

user is a user name or user ID. extra_group is a group name or group ID.

user 是用户名或用户 ID,extra_group 是组名或组 ID。

Some care needs to be taken when dropping privileges. Example:

放弃特权(dropping privileges)时需要多加小心。例如:

console.log(process.getgroups());         // [ 0 ]
process.initgroups('bnoordhuis', 1000);   // switch user
console.log(process.getgroups());         // [ 27, 30, 46, 1000, 0 ]
process.setgid(1000);                     // drop root gid
console.log(process.getgroups());         // [ 27, 30, 46, 1000 ]

process.version#

A compiled-in property that exposes NODE_VERSION.

一个编译时写入的属性,暴露了 NODE_VERSION

console.log('版本: ' + process.version);

process.versions#

A property exposing version strings of node and its dependencies.

一个暴露 node 及其依赖库版本字符串的属性。

console.log(process.versions); 

Will print something like:

输出:

{ http_parser: '1.0',
  node: '0.10.4',
  v8: '3.14.5.8',
  ares: '1.9.0-DEV',
  uv: '0.10.3',
  zlib: '1.2.3',
  modules: '11',
  openssl: '1.0.1e' }

process.config#

An Object containing the JavaScript representation of the configure options that were used to compile the current node executable. This is the same as the "config.gypi" file that was produced when running the ./configure script.

一个包含用于编译当前 node 可执行文件的配置选项的对象。其内容与运行 ./configure 脚本生成的 "config.gypi" 文件相同。

An example of the possible output looks like:

可能的输出示例如下:

{ target_defaults:
   { cflags: [],
     default_configuration: 'Release',
     defines: [],
     include_dirs: [],
     libraries: [] },
  variables:
   { host_arch: 'x64',
     node_install_npm: 'true',
     node_prefix: '',
     node_shared_cares: 'false',
     node_shared_http_parser: 'false',
     node_shared_libuv: 'false',
     node_shared_v8: 'false',
     node_shared_zlib: 'false',
     node_use_dtrace: 'false',
     node_use_openssl: 'true',
     node_shared_openssl: 'false',
     strict_aliasing: 'true',
     target_arch: 'x64',
     v8_use_snapshot: 'true' } }

process.kill(pid, [signal])#

Send a signal to a process. pid is the process id and signal is the string describing the signal to send. Signal names are strings like 'SIGINT' or 'SIGUSR1'. If omitted, the signal will be 'SIGTERM'. See kill(2) for more information.

向进程发送一个信号。 pid 是进程的 id 而 signal 则是描述信号的字符串名称。信号的名称都形似 'SIGINT' 或者 'SIGUSR1'。如果没有指定参数则会默认发送 'SIGTERM' 信号,更多信息请查看 kill(2) 。

Note that just because the name of this function is process.kill, it is really just a signal sender, like the kill system call. The signal sent may do something other than kill the target process.

值得注意的是,虽然这个函数的名称是 process.kill,但它其实只是一个信号发送器,就像 kill 系统调用一样。发送的信号除了杀死 (kill) 目标进程之外,还可以用来做其他事情。

Example of sending a signal to yourself:

向当前进程发送信号的示例:

process.kill(process.pid, 'SIGHUP'); 

process.pid#

The PID of the process.

当前进程的 PID。

console.log('当前进程 id: ' + process.pid);

process.title#

Getter/setter to set what is displayed in 'ps'.

获取/设置 (Getter/setter) 'ps' 中显示的进程名。

When used as a setter, the maximum length is platform-specific and probably short.

当设置该属性时,所能设置的字符串最大长度视具体平台而定,通常比较短。

On Linux and OS X, it's limited to the size of the binary name plus the length of the command line arguments because it overwrites the argv memory.

在 Linux 和 OS X 上,它受限于二进制名称的长度加上命令行参数的长度,因为设置它会覆盖参数内存 (argv memory)。

v0.8 allowed for longer process title strings by also overwriting the environ memory but that was potentially insecure/confusing in some (rather obscure) cases.

v0.8 版本通过同时覆盖环境变量内存 (environ memory) 来允许更长的进程标题字符串,但这在某些(相当晦涩的)情况下存在潜在的不安全性和混乱。
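一个简单的示意(其中 'my-app' 只是一个假设的名称):

```javascript
// 读取 'ps' 中显示的进程名
console.log(process.title);

// 设置进程名(超长时可能被平台截断)
process.title = 'my-app';
console.log(process.title);
```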

process.arch#

What processor architecture you're running on: 'arm', 'ia32', or 'x64'.

返回当前 CPU 处理器的架构:'arm'、'ia32' 或者 'x64'。

console.log('当前CPU架构是:' + process.arch);

process.platform#

What platform you're running on: 'darwin', 'freebsd', 'linux', 'sunos' or 'win32'

返回当前程序运行的平台:'darwin'、'freebsd'、'linux'、'sunos' 或者 'win32'。

console.log('当前系统平台是: ' + process.platform);

process.memoryUsage()#

Returns an object describing the memory usage of the Node process measured in bytes.

返回一个对象,描述了 Node 进程的内存使用情况,单位为字节 (bytes)。

var util = require('util');

console.log(util.inspect(process.memoryUsage()));

This will generate:

输出将会是:

{ rss: 4935680,
  heapTotal: 1826816,
  heapUsed: 650472 } 

heapTotal and heapUsed refer to V8's memory usage.

heapTotal 和 heapUsed 指的是 V8 的内存使用情况。

process.nextTick(callback)#

  • callback Function

  • callback Function

Once the current event loop turn runs to completion, call the callback function.

在当前事件循环运行完成之后,调用 callback 回调函数。

This is not a simple alias to setTimeout(fn, 0), it's much more efficient. It runs before any additional I/O events (including timers) fire in subsequent ticks of the event loop.

这不是 setTimeout(fn, 0) 的简单别名,它的效率要高得多。在事件循环后续的 tick 中,它会在任何额外的 I/O 事件(包括定时器)触发之前运行。

console.log('开始');
process.nextTick(function() {
  console.log('nextTick 回调');
});
console.log('已设定');
// 输出:
// 开始
// 已设定
// nextTick 回调

This is important in developing APIs where you want to give the user the chance to assign event handlers after an object has been constructed, but before any I/O has occurred.

如果你想在对象构造完成之后、任何 I/O 发生之前,给用户一个指定事件处理器的机会,那么这个函数在开发 API 时就十分重要。

function MyThing(options) {
  this.setupOptions(options);

  process.nextTick(function() {
    this.startDoingStuff();
  }.bind(this));
}

var thing = new MyThing();
thing.getReadyForStuff();

// thing.startDoingStuff() 现在才被调用,而不是在此之前。

It is very important for APIs to be either 100% synchronous or 100% asynchronous. Consider this example:

保证一个 API 要么 100% 同步,要么 100% 异步,这是非常重要的。参考下面的例子:

// 警告!不要这样用!危险且不安全!
function maybeSync(arg, cb) {
  if (arg) {
    cb();
    return;
  }
  fs.stat('file', cb);
}

This API is hazardous. If you do this:

这个 API 是很危险的。如果你这样调用:

maybeSync(true, function() {
  foo();
});
bar(); 

then it's not clear whether foo() or bar() will be called first.

那么就无法确定到底是 foo() 先被调用,还是 bar() 先被调用。

This approach is much better:

用下面的方法就可以更好的解决:

function definitelyAsync(arg, cb) {
  if (arg) {
    process.nextTick(cb);
    return;
  }
  fs.stat('file', cb);
}

Note: the nextTick queue is completely drained on each pass of the event loop before additional I/O is processed. As a result, recursively setting nextTick callbacks will block any I/O from happening, just like a while(true); loop.

注意:在每轮事件循环中,nextTick 队列都会在处理额外的 I/O 之前被完全清空。因此,递归地设置 nextTick 回调会像一个 while(true); 循环一样,阻止任何 I/O 操作的发生。

process.umask([mask])#

Sets or reads the process's file mode creation mask. Child processes inherit the mask from the parent process. Returns the old mask if mask argument is given, otherwise returns the current mask.

设置或者读取进程的文件模式的创建掩码。子进程从父进程中继承这个掩码。如果设定了参数 mask 那么返回旧的掩码,否则返回当前的掩码。

var oldmask, newmask = 0644;

oldmask = process.umask(newmask);
console.log('原掩码: ' + oldmask.toString(8) + '\n' +
            '新掩码: ' + newmask.toString(8));

process.uptime()#

Number of seconds Node has been running.

返回 Node 程序已运行的秒数。
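一个简单的示例:

```javascript
// 输出进程自启动以来经过的秒数
console.log('Node 已运行 ' + process.uptime() + ' 秒');
```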

process.hrtime()#

Returns the current high-resolution real time in a [seconds, nanoseconds] tuple Array. It is relative to an arbitrary time in the past. It is not related to the time of day and therefore not subject to clock drift. The primary use is for measuring performance between intervals.

返回当前的高精度实时时间,形式为 [秒, 纳秒] 的元组数组。它相对于过去的某个任意时间,与日期无关,因此不受时钟漂移的影响。主要用途是测量时间间隔,从而衡量程序的性能。

You may pass in the result of a previous call to process.hrtime() to get a diff reading, useful for benchmarks and measuring intervals:

你可以将前一次调用 process.hrtime() 的结果传递给当前的 process.hrtime(),它会返回两次调用之间的时间差,可用于基准测试和测量时间间隔:

var time = process.hrtime();
// [ 1800216, 25 ]

setTimeout(function() {
  var diff = process.hrtime(time);
  // [ 1, 552 ]

  console.log('基准相差 %d 纳秒', diff[0] * 1e9 + diff[1]);
  // 基准相差 1000000527 纳秒
}, 1000);

utils#

稳定度: 4 - 冻结

These functions are in the module 'util'. Use require('util') to access them.

这些函数位于 'util' 模块中,通过 require('util') 来使用它们。

The util module is primarily designed to support the needs of Node's internal APIs. Many of these utilities are useful for your own programs. If you find that these functions are lacking for your purposes, however, you are encouraged to write your own utilities. We are not interested in any future additions to the util module that are unnecessary for Node's internal functionality.

util 模块设计的主要目的是满足 Node 内部 API 的需求。不过其中的很多方法在你编写 Node 程序时也很有帮助。如果你觉得这些方法满足不了你的需求,我们鼓励你编写自己的实用工具方法。我们不希望 util 模块中添加任何对 Node 内部功能来说非必要的扩展。

util.debuglog(section)#

  • section String The section of the program to be debugged
  • Returns: Function The logging function

  • section String 被调试的程序节点部分

  • 返回值: Function 日志处理函数

This is used to create a function which conditionally writes to stderr based on the existence of a NODE_DEBUG environment variable. If the section name appears in that environment variable, then the returned function will be similar to console.error(). If not, then the returned function is a no-op.

这个方法根据 NODE_DEBUG 环境变量是否存在,创建一个有条件地写到 stderr 的函数。如果 section 名称出现在这个环境变量里,那么返回的函数功能类似于 console.error();否则返回一个空操作 (no-op) 函数。

For example:

例如:

var debuglog = util.debuglog('foo');

var bar = 123;
debuglog('hello from foo [%d]', bar);


If this program is run with `NODE_DEBUG=foo` in the environment, then
it will output something like:

如果这个程序以`NODE_DEBUG=foo` 的环境运行,那么它将会输出:

    FOO 3245: hello from foo [123]

where `3245` is the process id.  If it is not run with that
environment variable set, then it will not print anything.

其中 `3245` 是进程 ID。如果程序运行时没有设置该环境变量,那么将不会输出任何东西。

You may separate multiple `NODE_DEBUG` environment variables with a
comma.  For example, `NODE_DEBUG=fs,net,tls`.

`NODE_DEBUG` 环境变量中的多个值可以用逗号进行分隔。例如,`NODE_DEBUG=fs,net,tls`。

## util.format(format, [...])

Returns a formatted string using the first argument as a `printf`-like format.

根据第一个参数,返回一个格式化字符串,类似`printf`的格式化输出。

The first argument is a string that contains zero or more *placeholders*.
Each placeholder is replaced with the converted value from its corresponding
argument. Supported placeholders are:

第一个参数是一个字符串,包含零个或多个*占位符*。
每一个占位符被替换为与其对应的转换后的值。
支持的占位符有:

* `%s` - String.
* `%d` - Number (both integer and float).
* `%j` - JSON.  Replaced with the string `'[Circular]'` if the argument
         contains circular references.
* `%%` - single percent sign (`'%'`). This does not consume an argument.

* `%s` - 字符串.
* `%d` - 数字 (整型和浮点型).
* `%j` - JSON. 如果这个参数包含循环对象的引用,将会被替换成字符串 `'[Circular]'`。
* `%%` - 单独一个百分号(`'%'`)。不会消耗一个参数。

If the placeholder does not have a corresponding argument, the placeholder is
not replaced.

如果占位符没有相对应的参数,占位符将不会被替换。

    util.format('%s:%s', 'foo'); // 'foo:%s'

If there are more arguments than placeholders, the extra arguments are
converted to strings with `util.inspect()` and these strings are concatenated,
delimited by a space.

如果参数多于占位符,多余的参数会被 `util.inspect()` 转换为字符串,并以空格分隔拼接在后面。

    util.format('%s:%s', 'foo', 'bar', 'baz'); // 'foo:bar baz'

If the first argument is not a format string then `util.format()` returns
a string that is the concatenation of all its arguments separated by spaces.
Each argument is converted to a string with `util.inspect()`.

如果第一个参数不是格式化字符串,那么 `util.format()` 会把所有参数用 `util.inspect()` 转成字符串,以空格分隔拼接后返回该字符串。

    util.format(1, 2, 3); // '1 2 3'

## util.log(string)

Output with timestamp on `stdout`.

在 `stdout` 中输出,并带有时间戳。

    require('util').log('Timestamped message.');

## util.inspect(object, [options])

Return a string representation of `object`, which is useful for debugging.

返回 `object` 的字符串表现形式,在代码调试的时候非常有用。

An optional *options* object may be passed that alters certain aspects of the
formatted string:

可以传入一个可选的 *options* 对象,来改变格式化字符串的某些方面:

 - `showHidden` - if `true` then the object's non-enumerable properties will be
   shown too. Defaults to `false`.

 - `showHidden` - 如果设为 `true`,那么该对象的不可枚举的属性将会被显示出来。默认为`false`.

 - `depth` - tells `inspect` how many times to recurse while formatting the
   object. This is useful for inspecting large complicated objects. Defaults to
   `2`. To make it recurse indefinitely pass `null`.

 - `depth` - 告诉 `inspect` 格式化对象的时候递归多少次。这个选项在格式化复杂对象的时候比较有用。 默认为
   `2`。如果想无穷递归下去,则赋值为`null`即可。

 - `colors` - if `true`, then the output will be styled with ANSI color codes.
   Defaults to `false`. Colors are customizable, see below.

 - `colors` - 如果设为 `true`,输出将会以 ANSI 颜色代码风格化。默认为 `false`。颜色是可定制的,见下文。

 - `customInspect` - if `false`, then custom `inspect(depth, opts)` functions
   defined on the objects being inspected won't be called. Defaults to `true`.

 - `customInspect` - 如果设为 `false`,那么定义在被检查对象上的`inspect(depth, opts)` 方法将不会被调用。 默认为`true`。

Example of inspecting all properties of the `util` object:

示例:检查`util`对象上的所有属性

    console.log(util.inspect(util, { showHidden: true, depth: null }));

Values may supply their own custom `inspect(depth, opts)` functions, when
called they receive the current depth in the recursive inspection, as well as
the options object passed to `util.inspect()`.

当被调用的时候,参数值可以提供自己的自定义`inspect(depth, opts)`方法。该方法会接收当前的递归检查深度,以及传入`util.inspect()`的其他参数。

### 自定义 `util.inspect` 颜色

<!-- type=misc -->

Color output (if enabled) of `util.inspect` is customizable globally
via `util.inspect.styles` and `util.inspect.colors` objects.

`util.inspect`彩色输出(如果启用的话) ,可以通过`util.inspect.styles` 和 `util.inspect.colors` 来全局定义。

`util.inspect.styles` is a map assigning each style a color
from `util.inspect.colors`.
Highlighted styles and their default values are:
 * `number` (yellow)
 * `boolean` (yellow)
 * `string` (green)
 * `date` (magenta)
 * `regexp` (red)
 * `null` (bold)
 * `undefined` (grey)
 * `special` - only function at this time (cyan)
 * `name` (intentionally no styling)

`util.inspect.styles`是通过`util.inspect.colors`分配给每个风格颜色的一个映射。
高亮风格和它们的默认值:
 * `number` (黄色)
 * `boolean` (黄色)
 * `string` (绿色)
 * `date` (洋红色)
 * `regexp` (红色)
 * `null` (粗体)
 * `undefined` (灰色)
 * `special` - 目前只用于函数 (青色)
 * `name` (无风格)

Predefined color codes are: `white`, `grey`, `black`, `blue`, `cyan`, 
`green`, `magenta`, `red` and `yellow`.
There are also `bold`, `italic`, `underline` and `inverse` codes.

预定义的颜色代码: `white`, `grey`, `black`, `blue`, `cyan`, 
`green`, `magenta`, `red` 和 `yellow`。
还有 `bold`, `italic`, `underline` 和 `inverse` 代码。

### 自定义对象的`inspect()`方法

<!-- type=misc -->

Objects also may define their own `inspect(depth)` function which `util.inspect()`
will invoke and use the result of when inspecting the object:

对象可以定义自己的 `inspect(depth)`方法;当使用`util.inspect()`检查该对象的时候,将会执行对象自定义的检查方法。

    var obj = { name: 'nate' };
    obj.inspect = function(depth) {
      return '{' + this.name + '}';
    };

    util.inspect(obj);
      // "{nate}"

You may also return another Object entirely, and the returned String will be
formatted according to the returned Object. This is similar to how
`JSON.stringify()` works:

您也可以返回完全不同的另一个对象,而且返回的字符串将被根据返回的对象格式化。它和`JSON.stringify()`工作原理类似:

    var obj = { foo: 'this will not show up in the inspect() output' };
    obj.inspect = function(depth) {
      return { bar: 'baz' };
    };

    util.inspect(obj);
      // "{ bar: 'baz' }"

## util.isArray(object)

Returns `true` if the given "object" is an `Array`. `false` otherwise.

如果给定的对象是`数组`类型,就返回`true`,否则返回`false`。

    util.isArray([])
      // true
    util.isArray(new Array)
      // true
    util.isArray({})
      // false

## util.isRegExp(object)

Returns `true` if the given "object" is a `RegExp`. `false` otherwise.

如果给定的对象是`RegExp`类型,就返回`true`,否则返回`false`。

    util.isRegExp(/some regexp/)
      // true
    util.isRegExp(new RegExp('another regexp'))
      // true
    util.isRegExp({})
      // false

## util.isDate(object)

Returns `true` if the given "object" is a `Date`. `false` otherwise.

如果给定的对象是`Date`类型,就返回`true`,否则返回`false`。

    util.isDate(new Date())
      // true
    util.isDate(Date())
      // false (没有关键字 'new' 返回一个字符串)
    util.isDate({})
      // false

## util.isError(object)

Returns `true` if the given "object" is an `Error`. `false` otherwise.

如果给定的对象是`Error`类型,就返回`true`,否则返回`false`。

    util.isError(new Error())
      // true
    util.isError(new TypeError())
      // true
    util.isError({ name: 'Error', message: 'an error occurred' })
      // false

## util.inherits(constructor, superConstructor)

Inherit the prototype methods from one
[constructor](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Object/constructor)
into another.  The prototype of `constructor` will be set to a new
object created from `superConstructor`.

将一个[构造函数](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Object/constructor)的原型方法继承到另一个构造函数中。`constructor` 的原型将被设置为一个由 `superConstructor` 创建的新对象。

As an additional convenience, `superConstructor` will be accessible
through the `constructor.super_` property.

作为一个额外的便利,你可以通过 `constructor.super_` 来访问到 `superConstructor`。

    var util = require("util");
    var events = require("events");

    function MyStream() {
        events.EventEmitter.call(this);
    }

    util.inherits(MyStream, events.EventEmitter);

    MyStream.prototype.write = function(data) {
        this.emit("data", data);
    }

    var stream = new MyStream();

    console.log(stream instanceof events.EventEmitter); // true
    console.log(MyStream.super_ === events.EventEmitter); // true

    stream.on("data", function(data) {
        console.log('Received data: "' + data + '"');
    })
    stream.write("It works!"); // 输出结果:Received data: "It works!"

## util.debug(string)

    稳定度: 0 - 已过时: 请使用 console.error() 代替

Deprecated predecessor of `console.error`.

`console.error`的已过时的前身

## util.error([...])

    稳定度: 0 - 已过时: 请使用 console.error() 代替

Deprecated predecessor of `console.error`.

`console.error`的已过时的前身

## util.puts([...])

   稳定度: 0 - 已过时: 请使用 console.log() 代替

Deprecated predecessor of `console.log`.

`console.log`的已过时的前身

## util.print([...])

   稳定度: 0 - 已过时: 请使用 console.log() 代替

Deprecated predecessor of `console.log`.

`console.log`的已过时的前身

## util.pump(readableStream, writableStream, [callback])

   稳定度: 0 - 已过时: 请使用readableStream.pipe(writableStream)代替

Deprecated predecessor of `stream.pipe()`.


`stream.pipe()`的已过时的前身

# 事件 (Events)

    稳定度: 4 - 冻结

<!--type=module-->

Many objects in Node emit events: a `net.Server` emits an event each time
a peer connects to it, a `fs.readStream` emits an event when the file is
opened. All objects which emit events are instances of `events.EventEmitter`.
You can access this module by doing: `require("events");`

Node里面的许多对象都会分发事件:一个`net.Server`对象会在每次有新连接时分发一个事件, 一个`fs.readStream`对象会在文件被打开的时候发出一个事件。
所有这些产生事件的对象都是 `events.EventEmitter` 的实例。
你可以通过`require("events");`来访问该模块

Typically, event names are represented by a camel-cased string, however,
there aren't any strict restrictions on that, as any string will be accepted.

通常,事件名是驼峰命名 (camel-cased) 的字符串。不过也没有强制的要求,任何字符串都是可以使用的。

Functions can then be attached to objects, to be executed when an event
is emitted. These functions are called _listeners_. Inside a listener
function, `this` refers to the `EventEmitter` that the listener was
attached to.

为了处理发出的事件,我们将函数 (Function) 关联到对象上。
我们把这些函数称为 _监听器 (listeners)_。
在监听函数中 `this` 指向当前监听函数所关联的 `EventEmitter` 对象。

## 类: events.EventEmitter

To access the EventEmitter class, `require('events').EventEmitter`.

通过 `require('events').EventEmitter` 获取 EventEmitter 类。

When an `EventEmitter` instance experiences an error, the typical action is
to emit an `'error'` event.  Error events are treated as a special case in node.
If there is no listener for it, then the default action is to print a stack
trace and exit the program.

当 `EventEmitter` 实例遇到错误时,通常的做法是分发一个 `'error'` 事件。error 事件在 node 中被作为特殊情况处理。
如果没有 listener 监听它,那么默认行为是打印栈追踪信息 (stack trace) 并退出程序。

All EventEmitters emit the event `'newListener'` when new listeners are
added and `'removeListener'` when a listener is removed.

EventEmitter 会在添加 listener 时触发 `'newListener'` 事件,删除 listener 时触发 `'removeListener'` 事件。

### emitter.addListener(event, listener)
### emitter.on(event, listener)

### emitter.addListener(event, listener)
### emitter.on(event, listener)

Adds a listener to the end of the listeners array for the specified event.

添加一个 listener 至特定事件的 listener 数组尾部。

    server.on('connection', function (stream) {
      console.log('someone connected!');
    });

Returns emitter, so calls can be chained.

返回 emitter,方便链式调用。

### emitter.once(event, listener)

Adds a **one time** listener for the event. This listener is
invoked only the next time the event is fired, after which
it is removed.

添加一个 **一次性** listener,这个 listener 只会在下一次事件发生时被触发一次,触发完成后就被删除。

    server.once('connection', function (stream) {
      console.log('Ah, we have our first user!');
    });

Returns emitter, so calls can be chained.

返回 emitter,方便链式调用。

### emitter.removeListener(event, listener)

Remove a listener from the listener array for the specified event.
**Caution**: changes array indices in the listener array behind the listener.

从一个事件的 listener 数组中删除一个 listener。
**注意**:此操作会改变 listener 数组中位于被删除 listener 之后的所有 listener 的下标。

    var callback = function(stream) {
      console.log('someone connected!');
    };
    server.on('connection', callback);
    // ...
    server.removeListener('connection', callback);

Returns emitter, so calls can be chained.

返回 emitter,方便链式调用。

### emitter.removeAllListeners([event])

Removes all listeners, or those of the specified event.

删除所有 listener;如果指定了事件 (event),则只删除该事件的所有 listener。

Returns emitter, so calls can be chained.

返回 emitter,方便链式调用。

### emitter.setMaxListeners(n)

By default EventEmitters will print a warning if more than 10 listeners are
added for a particular event. This is a useful default which helps finding
memory leaks. Obviously not all Emitters should be limited to 10. This function
allows that to be increased. Set to zero for unlimited.

在默认情况下,如果某个事件被添加了超过 10 个 listener,EventEmitter 会打印警告。这个默认限制有助于发现内存泄露。
显然,并不是所有的 Emitter 都应该被限制在 10 个 listener 以内,通过这个函数可以提高这个限制。设置为 0 表示不限制。

Returns emitter, so calls can be chained.

返回 emitter,方便链式调用。

### EventEmitter.defaultMaxListeners

`emitter.setMaxListeners(n)` sets the maximum on a per-instance basis.
This class property lets you set it for *all* `EventEmitter` instances,
current and future, effective immediately. Use with care.

`emitter.setMaxListeners(n)` 设置每个 emitter 实例的最大监听数。
这个类属性为 **所有** `EventEmitter` 实例设置最大监听数(对所有已创建的实例和今后创建的实例都将立即生效)。
使用时请注意。

Note that `emitter.setMaxListeners(n)` still has precedence over
`EventEmitter.defaultMaxListeners`.

请注意,`emitter.setMaxListeners(n)` 优先于 `EventEmitter.defaultMaxListeners`。

### emitter.listeners(event)

Returns an array of listeners for the specified event.

返回指定事件的 listener 数组。

    server.on('connection', function (stream) {
      console.log('someone connected!');
    });
    console.log(util.inspect(server.listeners('connection'))); // [ [Function] ]

### emitter.emit(event, [arg1], [arg2], [...])

Execute each of the listeners in order with the supplied arguments.

使用提供的参数按顺序执行指定事件的每个 listener。

Returns `true` if event had listeners, `false` otherwise.

若事件有 listener 则返回 `true` 否则返回 `false`。

### 类方法: EventEmitter.listenerCount(emitter, event)

Return the number of listeners for a given event.

返回指定事件的 listener 数量。

### 事件: 'newListener'

* `event` {String} The event name
* `listener` {Function} The event handler function

* `event` {String} 事件名
* `listener` {Function} 事件处理函数

This event is emitted any time someone adds a new listener.  It is unspecified
if `listener` is in the list returned by `emitter.listeners(event)`.

在添加 listener 时会发生该事件。
此时无法确定 `listener` 是否在 `emitter.listeners(event)` 返回的列表中。

### 事件: 'removeListener'

* `event` {String} The event name
* `listener` {Function} The event handler function

* `event` {String} 事件名
* `listener` {Function} 事件处理函数

This event is emitted any time someone removes a listener.  It is unspecified
if `listener` is in the list returned by `emitter.listeners(event)`.


在移除 listener 时会发生该事件。
此时无法确定 `listener` 是否在 `emitter.listeners(event)` 返回的列表中。

# 域

    稳定度: 2 - 不稳定

Domains provide a way to handle multiple different IO operations as a
single group.  If any of the event emitters or callbacks registered to a
domain emit an `error` event, or throw an error, then the domain object
will be notified, rather than losing the context of the error in the
`process.on('uncaughtException')` handler, or causing the program to
exit immediately with an error code.

Domains 提供了一种把多个不同的 IO 操作当作一个单独的组来处理的方式。如果任何一个注册到域的事件分发器或回调函数分发了一个 `error` 事件,或是抛出一个错误,那么域对象会被通知到;而不是在 `process.on('uncaughtException')` 处理程序中丢失错误的上下文,也不会使程序带着错误码立即退出。

## 警告: 不要忽视错误!

<!-- type=misc -->

Domain error handlers are not a substitute for closing down your
process when an error occurs.

当错误发生时,域的 error 处理程序并不能替代关闭进程的做法。

By the very nature of how `throw` works in JavaScript, there is almost
never any way to safely "pick up where you left off", without leaking
references, or creating some other sort of undefined brittle state.

基于 `throw` 在 JavaScript 中的工作方式,几乎没有任何方式能够在不泄露引用、不造成某种其他未定义的脆弱状态的前提下,安全地"从你离开的地方重新拾起 (pick up where you left off)"。

The safest way to respond to a thrown error is to shut down the
process.  Of course, in a normal web server, you might have many
connections open, and it is not reasonable to abruptly shut those down
because an error was triggered by someone else.

响应被抛出错误的最安全方式就是关闭进程。当然,一个正常的 Web 服务器可能有很多打开的连接,仅仅因为某个错误是由别人触发的,就突然关闭这些连接是不合理的。

The better approach is send an error response to the request that
triggered the error, while letting the others finish in their normal
time, and stop listening for new requests in that worker.

更好的方法是给触发错误的请求发送一个错误响应,让其他连接在正常的时间内完成工作,并让该工作进程停止监听新的请求。

In this way, `domain` usage goes hand-in-hand with the cluster module,
since the master process can fork a new worker when a worker
encounters an error.  For node programs that scale to multiple
machines, the terminating proxy or service registry can take note of
the failure, and react accordingly.

通过这种方式,`domain` 的使用可以与 cluster 模块配合:当某个工作进程遇到错误时,主进程可以 fork 一个新的工作进程。对于扩展到多台机器的 node 程序而言,终端代理或服务注册中心可以记录这次失败,并做出相应的反应。

For example, this is not a good idea:

举例来说,以下就不是一个好想法:

var d = require('domain').create();
d.on('error', function(er) {
  // 这个错误不会导致进程崩溃,但是情况会更糟糕!
  // 虽然我们阻止了进程突然重启动,但是我们已经发生了资源泄露
  // 这种事情的发生会让我们发疯。
  // 不如调用 process.on('uncaughtException')!
  console.log('error, but oh well', er.message);
});
d.run(function() {
  require('http').createServer(function(req, res) {
    handleRequest(req, res);
  }).listen(PORT);
});

By using the context of a domain, and the resilience of separating our program into multiple worker processes, we can react more appropriately, and handle errors with much greater safety.

通过域的上下文,以及将程序分隔成多个工作进程所带来的弹性,我们可以做出更恰当的反应,更安全地处理错误。

// 好一些的做法!

var cluster = require('cluster');
var PORT = +process.env.PORT || 1337;

if (cluster.isMaster) {
  // 在实际情况中,你可能会使用不止 2 个工作进程,
  // 而且可能不会把主进程和工作进程放在同一个文件中。
  //
  // 你当然也可以把日志记录做得更讲究一些,
  // 并实现任何你需要的自定义逻辑来防范 DoS 攻击
  // 以及其他不良行为。
  //
  // 参见 cluster 文档中的选项。
  //
  // 最重要的是,主进程做的事情很少,
  // 这增强了我们应对意外错误的弹性。

  cluster.fork();
  cluster.fork();

  cluster.on('disconnect', function(worker) {
    console.error('disconnect!');
    cluster.fork();
  });

} else {
  // 工作进程
  //
  // 这是我们放 bug 的地方!

  var domain = require('domain');

  // 关于使用工作进程处理请求的更多细节,
  // 例如它的工作原理和注意事项等,参见 cluster 文档。

  var server = require('http').createServer(function(req, res) {
    var d = domain.create();
    d.on('error', function(er) {
      console.error('error', er.stack);

      // 注意!我们现在处于危险的状态!
      // 所以进行最基本的善后,然后尽快关闭这个进程。
      try {
        // 确保我们在 30 秒内关闭
        var killtimer = setTimeout(function() {
          process.exit(1);
        }, 30000);
        // 但是不要仅仅为了等它而让进程保持运行!
        killtimer.unref();

        // 停止接收新的请求
        server.close();

        // 让主进程知道我们挂了。这会在 cluster 主进程中
        // 触发 'disconnect' 事件,随后 fork 一个新的工作进程。
        cluster.worker.disconnect();

        // 尝试给触发问题的请求发送一个错误响应
        res.statusCode = 500;
        res.setHeader('content-type', 'text/plain');
        res.end('Oops, there was a problem!\n');
      } catch (er2) {
        // 呃,这个时候我们也做不了什么了。
        console.error('Error sending 500!', er2.stack);
      }
    });

    // 因为 req 和 res 在这个域存在之前就被创建,
    // 所以我们需要显式添加它们。
    // 详见下面关于显式和隐式绑定的解释。
    d.add(req);
    d.add(res);

    // 现在在域里面运行处理器函数。
    d.run(function() {
      handleRequest(req, res);
    });
  });
  server.listen(PORT);
}

    // 这个部分不是很重要。只是一个简单的路由例子。
    // 你会想把你的超级给力的应用逻辑放在这里。
    function handleRequest(req, res) {
      switch(req.url) {
        case '/error':
          // 我们干了一些异步的东西,然后。。。
          setTimeout(function() {
            // 呃。。。
            flerb.bark();
          });
          break;
        default:
          res.end('ok');
      }
    }

对Error(错误)对象的内容添加#

Any time an Error object is routed through a domain, a few extra fields are added to it.

每当一个 Error 对象被路由经过一个域时,都会给它添加几个额外的字段。

  • error.domain The domain that first handled the error.
  • error.domainEmitter The event emitter that emitted an 'error' event with the error object.
  • error.domainBound The callback function which was bound to the domain, and passed an error as its first argument.
  • error.domainThrown A boolean indicating whether the error was thrown, emitted, or passed to a bound callback function.

  • error.domain 第一个处理这个错误的域。
  • error.domainEmitter 用这个错误对象分发 'error' 事件的事件分发器。
  • error.domainBound 被绑定到域的回调函数,错误会作为第一个参数传递给它。
  • error.domainThrown 一个布尔值,表明这个错误是被抛出的、被分发的,还是被传递给某个被绑定的回调函数的。

隐式绑定#

If domains are in use, then all new EventEmitter objects (including Stream objects, requests, responses, etc.) will be implicitly bound to the active domain at the time of their creation.

如果正在使用域,那么所有新的 EventEmitter 对象(包括 Stream 对象、请求、应答等等)都会在创建时被隐式绑定到当时的有效域。

Additionally, callbacks passed to lowlevel event loop requests (such as to fs.open, or other callback-taking methods) will automatically be bound to the active domain. If they throw, then the domain will catch the error.

而且,传递给底层事件循环请求的回调函数(例如传给 fs.open 或其它接受回调函数的方法的)会自动绑定到有效域。如果这些回调函数抛出错误,那么这个域会捕捉到这个错误。

In order to prevent excessive memory usage, Domain objects themselves are not implicitly added as children of the active domain. If they were, then it would be too easy to prevent request and response objects from being properly garbage collected.

为了防止内存的过度使用,Domain 对象自身不会被隐式添加为有效域的子对象。否则,会很容易妨碍请求和应答对象被正常垃圾回收。

If you want to nest Domain objects as children of a parent Domain, then you must explicitly add them.

如果你在一个父Domain对象里嵌套子Domain对象,那么你需要显式地添加它们。

Implicit binding routes thrown errors and 'error' events to the Domain's error event, but does not register the EventEmitter on the Domain, so domain.dispose() will not shut down the EventEmitter. Implicit binding only takes care of thrown errors and 'error' events.

隐式绑定会把被抛出的错误和 'error' 事件路由到 Domain 的 error 事件,但不会把 EventEmitter 注册到该 Domain 上,所以 domain.dispose() 不会关闭这个 EventEmitter。隐式绑定只处理被抛出的错误和 'error' 事件。

显式绑定#

Sometimes, the domain in use is not the one that ought to be used for a specific event emitter. Or, the event emitter could have been created in the context of one domain, but ought to instead be bound to some other domain.

有时,正在使用的域并不是某个事件分发器所应属的域。又或者,事件分发器在一个域内被创建,但是应该被绑定到另一个域。

For example, there could be one domain in use for an HTTP server, but perhaps we would like to have a separate domain to use for each request.

例如,对于一个HTTP服务器,可以有一个正在使用的域,但我们可能希望对每一个请求使用一个不同的域。

That is possible via explicit binding.

这可以通过显式绑定来做到。

For example:

例如:

// 为服务器创建一个顶层的域
var serverDomain = domain.create();

serverDomain.run(function() {
  // 服务器在serverDomain的作用域内被创建
  http.createServer(function(req, res) {
    // req和res同样在serverDomain的作用域内被创建
    // 但是,我们想对于每一个请求使用一个不一样的域。
    // 所以我们首先创建一个域,然后将req和res添加到这个域上。
    var reqd = domain.create();
    reqd.add(req);
    reqd.add(res);
    reqd.on('error', function(er) {
      console.error('Error', er, req.url);
      try {
        res.writeHead(500);
        res.end('Error occurred, sorry.');
      } catch (er) {
        console.error('Error sending 500', er, req.url);
      }
    });
  }).listen(1337);    
});

domain.create()#

  • return: Domain

  • return: Domain

Returns a new Domain object.

返回一个新的Domain对象。

类: Domain#

The Domain class encapsulates the functionality of routing errors and uncaught exceptions to the active Domain object.

Domain 类封装了将错误和未被捕捉的异常路由到有效 Domain 对象的功能。

Domain is a child class of EventEmitter. To handle the errors that it catches, listen to its error event.

Domain是 EventEmitter类的一个子类。监听它的error事件来处理它捕捉到的错误。

domain.run(fn)#

  • fn Function

  • fn Function

Run the supplied function in the context of the domain, implicitly binding all event emitters, timers, and lowlevel requests that are created in that context.

在域的上下文里运行提供的函数,隐式地绑定所有该上下文里创建的事件分发器,计时器和低层请求。

This is the most basic way to use a domain.

这是使用一个域的最基本的方式。

Example:

示例:

var d = domain.create();
d.on('error', function(er) {
  console.error('Caught error!', er);
});
d.run(function() {
  process.nextTick(function() {
    setTimeout(function() { // 模拟几个不同的异步的东西
      fs.open('non-existent file', 'r', function(er, fd) {
        if (er) throw er;
        // 继续。。。
      });
    }, 100);
  });
});

In this example, the d.on('error') handler will be triggered, rather than crashing the program.

在这个例子里, d.on('error') 处理器会被触发,而不是导致程序崩溃。

domain.members#

  • Array

  • Array

An array of timers and event emitters that have been explicitly added to the domain.

一个数组,里面的元素是被显式添加到域里的计时器和事件分发器。

domain.add(emitter)#

  • emitter EventEmitter | Timer emitter or timer to be added to the domain

  • emitter EventEmitter | Timer 被添加到域里的事件分发器或计时器

Explicitly adds an emitter to the domain. If any event handlers called by the emitter throw an error, or if the emitter emits an error event, it will be routed to the domain's error event, just like with implicit binding.

显式地将一个分发器添加到域。如果这个分发器调用的任何事件处理器抛出错误,或者这个分发器分发了一个 error 事件,那么它会被路由到这个域的 error 事件,就像隐式绑定一样。

This also works with timers that are returned from setInterval and setTimeout. If their callback function throws, it will be caught by the domain 'error' handler.

这对于 setInterval 和 setTimeout 返回的计时器同样适用。如果这些计时器的回调函数抛出错误,它会被这个域的 error 处理器捕捉到。

If the Timer or EventEmitter was already bound to a domain, it is removed from that one, and bound to this one instead.

如果这个Timer或EventEmitter对象已经被绑定到另外一个域,那么它将会从那个域被移除,然后绑定到当前的域。

domain.remove(emitter)#

  • emitter EventEmitter | Timer emitter or timer to be removed from the domain

  • emitter EventEmitter | Timer 要从域里被移除的分发器或计时器

The opposite of domain.add(emitter). Removes domain handling from the specified emitter.

与 domain.add(emitter) 函数恰恰相反,这个函数将指定分发器上的域处理移除。

domain.bind(callback)#

  • callback Function The callback function
  • return: Function The bound function

  • callback Function 回调函数

  • return: Function 被绑定的函数

The returned function will be a wrapper around the supplied callback function. When the returned function is called, any errors that are thrown will be routed to the domain's error event.

返回的函数会是一个对于所提供的回调函数的包装函数。当这个被返回的函数被调用时,所有被抛出的错误都会被导向到这个域的error事件。

例子#

var d = domain.create();

function readSomeFile(filename, cb) {
  fs.readFile(filename, 'utf8', d.bind(function(er, data) {
    // 如果这里抛出错误,它也会被传递到这个域
    return cb(er, data ? JSON.parse(data) : null);
  }));
}

d.on('error', function(er) {
  // 有个地方发生了一个错误。
  // 如果我们现在抛出这个错误,它会让整个程序崩溃
  // 并给出行号和栈信息。
});

domain.intercept(callback)#

  • callback Function The callback function
  • return: Function The intercepted function

  • callback Function 回调函数

  • return: Function 被拦截的函数

This method is almost identical to domain.bind(callback). However, in addition to catching thrown errors, it will also intercept Error objects sent as the first argument to the function.

这个函数与domain.bind(callback)几乎一模一样。但是,除了捕捉被抛出的错误外,它还会拦截作为第一参数被传递到这个函数的Error对象。

In this way, the common if (er) return callback(er); pattern can be replaced with a single error handler in a single place.

这样,常见的 if (er) return callback(er); 模式就可以在单独一个地方被单独一个错误处理器所取代。

例子#

var d = domain.create();

function readSomeFile(filename, cb) {
  fs.readFile(filename, 'utf8', d.intercept(function(data) {
    // 注意:第一个参数永远不会被传递到这个回调,
    // 因为它被认定为 'Error' 参数,已被域拦截。

    // 如果这里抛出错误,它也会被传递到这个域,
    // 这样错误处理逻辑就可以移到域的 'error' 事件上,
    // 而不必在程序中到处重复。
    return cb(null, JSON.parse(data));
  }));
}

d.on('error', function(er) {
  // 有个地方发生了一个错误。
  // 如果我们现在抛出这个错误,它会让整个程序崩溃
  // 并给出行号和栈信息。
});

domain.enter()#

The enter method is plumbing used by the run, bind, and intercept methods to set the active domain. It sets domain.active and process.domain to the domain, and implicitly pushes the domain onto the domain stack managed by the domain module (see domain.exit() for details on the domain stack). The call to enter delimits the beginning of a chain of asynchronous calls and I/O operations bound to a domain.

enter 函数是 run、bind 和 intercept 所使用的底层机制(plumbing),它们通过 enter 来设置活动域。enter 把 domain.active 和 process.domain 设置为这个域,并隐式地将该域推入由域模块管理的域栈(关于域栈的细节详见 domain.exit())。enter 的调用标志着绑定到一个域的异步调用链与 I/O 操作的开始。

Calling enter changes only the active domain, and does not alter the domain itself. Enter and exit can be called an arbitrary number of times on a single domain.

调用 enter 仅仅改变活动域,而不改变域本身。在同一个域上,enter 和 exit 可以被调用任意多次。

If the domain on which enter is called has been disposed, enter will return without setting the domain.

如果调用 enter 的域已经被销毁(disposed),enter 将直接返回而不设置该域。

domain.exit()#

The exit method exits the current domain, popping it off the domain stack. Any time execution is going to switch to the context of a different chain of asynchronous calls, it's important to ensure that the current domain is exited. The call to exit delimits either the end of or an interruption to the chain of asynchronous calls and I/O operations bound to a domain.

exit 函数退出当前的域,将其从域栈中弹出。每当程序的执行流程将要切换到另一个异步调用链的上下文时,必须确保退出当前的域。exit 的调用标志着绑定到一个域的异步调用链与 I/O 操作的结束或中断。

If there are multiple, nested domains bound to the current execution context, exit will exit any domains nested within this domain.

如果有多个嵌套的域绑定到当前的执行上下文,exit 将退出嵌套在这个域之内的所有域。

Calling exit changes only the active domain, and does not alter the domain itself. Enter and exit can be called an arbitrary number of times on a single domain.

调用 exit 只会改变活动域,而不会改变域本身。在同一个域上,enter 和 exit 可以被调用任意多次。

If the domain on which exit is called has been disposed, exit will return without exiting the domain.

如果调用 exit 的域已经被销毁(disposed),exit 将直接返回而不退出该域。

domain.dispose()#

稳定度: 0 - 已过时。请通过设置在域上的 error 事件处理器,显式地从失败的 I/O 操作中恢复。

Once dispose has been called, the domain will no longer be used by callbacks bound into the domain via run, bind, or intercept, and a dispose event is emitted.

一旦dispose被调用,通过runbindintercept绑定到这个域的回调函数将不再使用这个域,并且一个dispose事件会被分发。

Buffer#

稳定度: 3 - 稳定

Pure JavaScript is Unicode friendly but not nice to binary data. When dealing with TCP streams or the file system, it's necessary to handle octet streams. Node has several strategies for manipulating, creating, and consuming octet streams.

纯 JavaScript 对 Unicode 比较友好,但无法很好地处理二进制数据。在处理 TCP 流或文件系统时,需要处理八位字节流。Node 有几种操作、创建和消费八位字节流的策略。

Raw data is stored in instances of the Buffer class. A Buffer is similar to an array of integers but corresponds to a raw memory allocation outside the V8 heap. A Buffer cannot be resized.

原始数据保存在 Buffer 类的实例中。一个 Buffer 实例类似于一个整数数组,但对应着 V8 堆之外的一块原始内存分配区域。一个 Buffer 的大小不可变。

The Buffer class is a global, making it very rare that one would need to ever require('buffer').

Buffer 类是全局的,因此几乎不需要用 require('buffer') 来引入它。

Converting between Buffers and JavaScript string objects requires an explicit encoding method. Here are the different string encodings.

在 Buffer 和 JavaScript 字符串对象之间转换时,需要指定一个明确的编码方法。下面是几种不同的字符串编码。

  • 'ascii' - for 7 bit ASCII data only. This encoding method is very fast, and will strip the high bit if set.

  • 'ascii' - 仅适用于 7 bit ASCII 格式数据。这个编码方式非常快速,而且会剥离最高位(如果被设置)。

  • 'utf8' - Multibyte encoded Unicode characters. Many web pages and other document formats use UTF-8.

  • 'utf8' - 多字节编码 Unicode字符。很多网页或者其他文档的编码格式都是使用 UTF-8的。

  • 'utf16le' - 2 or 4 bytes, little endian encoded Unicode characters. Surrogate pairs (U+10000 to U+10FFFF) are supported.

  • 'utf16le' - 2 或者 4 字节, Little Endian (LE) 编码Unicode字符。 代理对 (U+10000 to U+10FFFF) 是支持的.(BE和LE表示大端和小端,Little-Endian就是低位字节排放在内存的低地址端,高位字节排放在内存的高地址端;Big-Endian就是高位字节排放在内存的低地址端,低位字节排放在内存的高地址端;下同)

  • 'ucs2' - Alias of 'utf16le'.

  • 'ucs2' - 'utf16le'的别名.

  • 'base64' - Base64 string encoding.

  • 'base64' - Base64 字符串编码。

  • 'binary' - A way of encoding raw binary data into strings by using only the first 8 bits of each character. This encoding method is deprecated and should be avoided in favor of Buffer objects where possible. This encoding will be removed in future versions of Node.

  • 'binary' - 一个将原始2进制数据编码为字符串的方法,仅使用每个字符的前8bits。 这个编码方式已经被弃用而且应该被避免,尽可能的使用Buffer对象。这个编码方式将会在未来的Node版本中移除。

  • 'hex' - Encode each byte as two hexadecimal characters.

  • 'hex' - 把每个byte编码成2个十六进制字符
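The encodings above can be exercised via toString(); a minimal sketch (not from the original documentation):

上述编码可以结合 toString() 来体会;一个最小示意(并非原文档内容):

```javascript
var buf = new Buffer('Node', 'utf8');

console.log(buf.toString('hex'));     // 4e6f6465
console.log(buf.toString('base64'));  // Tm9kZQ==
console.log(buf.toString('ascii'));   // Node
```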

类: Buffer#

The Buffer class is a global type for dealing with binary data directly. It can be constructed in a variety of ways.

Buffer 类是一个全局变量类型,用来直接处理2进制数据的。 它能够使用多种方式构建。

new Buffer(size)#

  • size Number

  • size Number

Allocates a new buffer of size octets.

分配一个大小为 size 个八位字节的新 buffer。

new Buffer(array)#

  • array Array

  • array Array

Allocates a new buffer using an array of octets.

使用一个八位字节数组 array 分配一个新的 buffer。

new Buffer(str, [encoding])#

  • str String - string to encode.
  • encoding String - encoding to use, Optional.

  • str String类型 - 需要存入buffer的string字符串.

  • encoding String类型 - 使用什么编码方式,参数可选.

Allocates a new buffer containing the given str. encoding defaults to 'utf8'.

分配一个新的 buffer ,其中包含着给定的 str字符串. encoding 编码方式默认是:'utf8'.
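The three constructor forms above can be sketched as follows (a minimal illustration, not from the original documentation):

上面三种构造方式可以用如下最小示例演示(并非原文档内容):

```javascript
var b1 = new Buffer(4);                    // 按大小分配
var b2 = new Buffer([0x4e, 0x6f]);         // 从八位字节数组分配
var b3 = new Buffer('68656c6c6f', 'hex');  // 从字符串按给定编码分配

console.log(b1.length);      // 4
console.log(b2.toString());  // No
console.log(b3.toString());  // hello
```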

类方法: Buffer.isEncoding(encoding)#

  • encoding String The encoding string to test

  • encoding String 用来测试给定的编码字符串

Returns true if the encoding is a valid encoding argument, or false otherwise.

如果给定的编码 encoding 是有效的,返回 true,否则返回 false。
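For example (a short sketch, not from the original documentation):

例如(简短示意,并非原文档内容):

```javascript
console.log(Buffer.isEncoding('utf8'));   // true
console.log(Buffer.isEncoding('hex'));    // true
console.log(Buffer.isEncoding('bogus'));  // false
```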

类方法: Buffer.isBuffer(obj)#

  • obj Object
  • Return: Boolean

  • obj Object

  • 返回: Boolean

Tests if obj is a Buffer.

测试这个 obj 是否是一个 Buffer.
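For example (a short sketch, not from the original documentation):

例如(简短示意,并非原文档内容):

```javascript
var buf = new Buffer('hello');

console.log(Buffer.isBuffer(buf));      // true
console.log(Buffer.isBuffer('hello'));  // false
```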

类方法: Buffer.byteLength(string, [encoding])#

  • string String
  • encoding String, Optional, Default: 'utf8'
  • Return: Number

  • string String类型

  • encoding String类型, 可选参数, 默认是: 'utf8'
  • Return: Number类型

Gives the actual byte length of a string. encoding defaults to 'utf8'. This is not the same as String.prototype.length since that returns the number of characters in a string.

返回字符串的实际 byte 长度。encoding 编码默认是 'utf8'。它与 String.prototype.length 不同,后者返回的是字符串中字符的数量。 (译者:当用户在写 HTTP 响应头 Content-Length 的时候,千万记得一定要用 Buffer.byteLength 方法,不要使用 String.prototype.length

Example:

示例:

str = '\u00bd + \u00bc = \u00be';
console.log(str + ": " + str.length + " characters, " +
  Buffer.byteLength(str, 'utf8') + " bytes");

// ½ + ¼ = ¾: 9 characters, 12 bytes

类方法: Buffer.concat(list, [totalLength])#

  • list Array List of Buffer objects to concat
  • totalLength Number Total length of the buffers when concatenated

  • list Array数组类型,Buffer数组,用于被连接。

  • totalLength Number类型 上述Buffer数组的所有Buffer的总大小。(译者:注意这里的totalLength不是数组长度是数组里Buffer实例的大小总和)

Returns a buffer which is the result of concatenating all the buffers in the list together.

返回一个保存着将传入buffer数组中所有buffer对象拼接在一起的buffer对象。(译者:有点拗口,其实就是将数组中所有的buffer实例通过复制拼接在一起)

If the list has no items, or if the totalLength is 0, then it returns a zero-length buffer.

如果传入的数组没有内容,或者 totalLength 参数是0,那将返回一个zero-length的buffer。

If the list has exactly one item, then the first item of the list is returned.

如果数组中只有一项,那么这第一项就会被返回。

If the list has more than one item, then a new Buffer is created.

如果数组中的项多于一个,那么一个新的Buffer实例将被创建。

If totalLength is not provided, it is read from the buffers in the list. However, this adds an additional loop to the function, so it is faster to provide the length explicitly.

如果 totalLength 参数没有提供,虽然会从buffer数组中计算读取,但是会增加一个额外的循环来计算它,所以提供一个明确的 totalLength 参数将会更快。
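A minimal sketch of the behaviors above, with totalLength given explicitly (not from the original documentation):

上述行为的一个最小示意,显式给出 totalLength(并非原文档内容):

```javascript
var a = new Buffer('Hello, ');
var b = new Buffer('world!');

// 显式给出 totalLength,省去一次额外的长度计算循环
var joined = Buffer.concat([a, b], a.length + b.length);

console.log(joined.toString());         // Hello, world!
console.log(Buffer.concat([]).length);  // 0:空数组返回零长度 buffer
```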

buf.length#

  • Number

  • Number类型

The size of the buffer in bytes. Note that this is not necessarily the size of the contents. length refers to the amount of memory allocated for the buffer object. It does not change when the contents of the buffer are changed.

这个buffer的bytes大小。注意这未必是这buffer里面内容的大小。length 的依据是buffer对象所分配的内存数值,它不会随着这个buffer对象内容的改变而改变。

buf = new Buffer(1234);

console.log(buf.length);
buf.write("some string", 0, "ascii");
console.log(buf.length);

// 1234
// 1234

buf.write(string, [offset], [length], [encoding])#

  • string String - data to be written to buffer
  • offset Number, Optional, Default: 0
  • length Number, Optional, Default: buffer.length - offset
  • encoding String, Optional, Default: 'utf8'

  • string String类型 - 将要被写入 buffer 的数据

  • offset Number类型, 可选参数, 默认: 0
  • length Number类型, 可选参数, 默认: buffer.length - offset
  • encoding String类型, 可选参数, 默认: 'utf8'

Writes string to the buffer at offset using the given encoding. offset defaults to 0, encoding defaults to 'utf8'. length is the number of bytes to write. Returns number of octets written. If buffer did not contain enough space to fit the entire string, it will write a partial amount of the string. length defaults to buffer.length - offset. The method will not write partial characters.

根据参数 offset 偏移量和指定的encoding编码方式,将参数 string 数据写入buffer。 offset偏移量 默认是 0, encoding编码方式默认是 'utf8'length长度是将要写入的字符串的bytes大小。 返回number类型,表示多少8位字节流被写入了。如果buffer 没有足够的空间来放入整个string,它将只会写入部分的字符串。 length 默认是 buffer.length - offset。 这个方法不会出现写入部分字符。

buf = new Buffer(256);
len = buf.write('\u00bd + \u00bc = \u00be', 0);
console.log(len + " bytes: " + buf.toString('utf8', 0, len));

buf.toString([encoding], [start], [end])#

  • encoding String, Optional, Default: 'utf8'
  • start Number, Optional, Default: 0
  • end Number, Optional, Default: buffer.length

  • encoding String类型, 可选参数, 默认: 'utf8'

  • start Number类型, 可选参数, 默认: 0
  • end Number类型, 可选参数, 默认: buffer.length

Decodes and returns a string from buffer data encoded with encoding (defaults to 'utf8') beginning at start (defaults to 0) and ending at end (defaults to buffer.length).

根据 encoding参数(默认是 'utf8')返回一个解码的 string 类型。还会根据传入的参数 start (默认是0) 和 end (默认是 buffer.length)作为取值范围。

See buffer.write() example, above.

查看上面buffer.write() 的例子.

buf.toJSON()#

Returns a JSON-representation of the Buffer instance. JSON.stringify implicitly calls this function when stringifying a Buffer instance.

返回 Buffer 实例的 JSON 表示。在字符串化一个 Buffer 实例时,JSON.stringify 会隐式调用这个函数。

Example:

示例:

var buf = new Buffer('test');
var json = JSON.stringify(buf);

console.log(json);
// '[116,101,115,116]'

var copy = new Buffer(JSON.parse(json));

console.log(copy);
// <Buffer 74 65 73 74>

buf[index]#

Get and set the octet at index. The values refer to individual bytes, so the legal range is between 0x00 and 0xFF hex or 0 and 255.

获取或者设置在指定index索引位置的8位字节。这个值是指单个字节,所以这个值必须在合法的范围,16进制的0x000xFF,或者0255

Example: copy an ASCII string into a buffer, one byte at a time:

例子: 拷贝一个 ASCII 编码的 string 字符串到一个 buffer, 一次一个 byte 进行拷贝:

str = "node.js";
buf = new Buffer(str.length);

for (var i = 0; i < str.length ; i++) {
  buf[i] = str.charCodeAt(i);
}

console.log(buf);

// node.js

buf.copy(targetBuffer, [targetStart], [sourceStart], [sourceEnd])#

  • targetBuffer Buffer object - Buffer to copy into
  • targetStart Number, Optional, Default: 0
  • sourceStart Number, Optional, Default: 0
  • sourceEnd Number, Optional, Default: buffer.length

  • targetBuffer Buffer 类型对象 - 拷贝的目标 Buffer

  • targetStart Number类型, 可选参数, 默认: 0
  • sourceStart Number类型, 可选参数, 默认: 0
  • sourceEnd Number类型, 可选参数, 默认: buffer.length

Does copy between buffers. The source and target regions can be overlapped. targetStart and sourceStart default to 0. sourceEnd defaults to buffer.length.

进行buffer的拷贝,源和目标可以是重叠的。 targetStart 目标开始偏移 和sourceStart源开始偏移 默认都是 0. sourceEnd 源结束位置偏移默认是源的长度 buffer.length.

All values passed that are undefined/NaN or are out of bounds are set equal to their respective defaults.

如果传递的值是undefined/NaN 或者是 out of bounds 超越边界的,就将设置为他们的默认值。(译者:这个默认值下面有的例子有说明)

Example: build two Buffers, then copy buf1 from byte 16 through byte 19 into buf2, starting at the 8th byte in buf2.

例子: 创建2个Buffer,然后把将buf1的16位到19位 拷贝到 buf2中,并且从buf2的第8位开始拷贝。

buf1 = new Buffer(26);
buf2 = new Buffer(26);

for (var i = 0 ; i < 26 ; i++) {
  buf1[i] = i + 97; // 97 是 ASCII 的 a
  buf2[i] = 33; // ASCII 的 !
}

buf1.copy(buf2, 8, 16, 20);
console.log(buf2.toString('ascii', 0, 25));

// !!!!!!!!qrst!!!!!!!!!!!!!

buf.slice([start], [end])#

  • start Number, Optional, Default: 0
  • end Number, Optional, Default: buffer.length

  • start Number类型, 可选参数, 默认: 0

  • end Number类型, 可选参数, 默认: buffer.length

Returns a new buffer which references the same memory as the old, but offset and cropped by the start (defaults to 0) and end (defaults to buffer.length) indexes. Negative indexes start from the end of the buffer.

返回一个新的buffer,这个buffer将会和老的buffer引用相同的内存地址,只是根据 start (默认是 0) 和end (默认是buffer.length) 偏移和裁剪了索引。 负的索引是从buffer尾部开始计算的。

Modifying the new buffer slice will modify memory in the original buffer!

修改这个新的buffer实例slice切片,也会改变原来的buffer

Example: build a Buffer with the ASCII alphabet, take a slice, then modify one byte from the original Buffer.

例子: 创建一个ASCII 字母的 Buffer,对它slice切片,然后修改源Buffer上的一个byte。

var buf1 = new Buffer(26);

for (var i = 0 ; i < 26 ; i++) {
  buf1[i] = i + 97; // 97 是 ASCII 的 a
}

var buf2 = buf1.slice(0, 3);
console.log(buf2.toString('ascii', 0, buf2.length));
buf1[0] = 33;
console.log(buf2.toString('ascii', 0, buf2.length));

// abc
// !bc

buf.readUInt8(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads an unsigned 8 bit integer from the buffer at the specified offset.

从这个buffer对象里,根据指定的偏移量,读取一个 unsigned 8 bit integer整形。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Example:

示例:

var buf = new Buffer(4);

buf[0] = 0x3;
buf[1] = 0x4;
buf[2] = 0x23;
buf[3] = 0x42;

for (var ii = 0; ii < buf.length; ii++) {
  console.log(buf.readUInt8(ii));
}

// 0x3
// 0x4
// 0x23
// 0x42

buf.readUInt16LE(offset, [noAssert])#

buf.readUInt16BE(offset, [noAssert])#

buf.readUInt16LE(offset, [noAssert])#

buf.readUInt16BE(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads an unsigned 16 bit integer from the buffer at the specified offset with specified endian format.

从这个buffer对象里,根据指定的偏移量,使用特殊的 endian字节序格式读取一个 unsigned 16 bit integer。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Example:

示例:

var buf = new Buffer(4);

buf[0] = 0x3;
buf[1] = 0x4;
buf[2] = 0x23;
buf[3] = 0x42;

console.log(buf.readUInt16BE(0));
console.log(buf.readUInt16LE(0));
console.log(buf.readUInt16BE(1));
console.log(buf.readUInt16LE(1));
console.log(buf.readUInt16BE(2));
console.log(buf.readUInt16LE(2));

// 0x0304
// 0x0403
// 0x0423
// 0x2304
// 0x2342
// 0x4223

buf.readUInt32LE(offset, [noAssert])#

buf.readUInt32BE(offset, [noAssert])#

buf.readUInt32LE(offset, [noAssert])#

buf.readUInt32BE(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads an unsigned 32 bit integer from the buffer at the specified offset with specified endian format.

从这个buffer对象里,根据指定的偏移量,使用指定的 endian字节序格式读取一个 unsigned 32 bit integer。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Example:

示例:

var buf = new Buffer(4);

buf[0] = 0x3;
buf[1] = 0x4;
buf[2] = 0x23;
buf[3] = 0x42;

console.log(buf.readUInt32BE(0));
console.log(buf.readUInt32LE(0));

// 0x03042342
// 0x42230403

buf.readInt8(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads a signed 8 bit integer from the buffer at the specified offset.

从这个buffer对象里,根据指定的偏移量,读取一个 signed 8 bit integer。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Works as buffer.readUInt8, except buffer contents are treated as two's complement signed values.

buffer.readUInt8一样的返回,除非buffer中包含了有作为2的补码的有符号值。

buf.readInt16LE(offset, [noAssert])#

buf.readInt16BE(offset, [noAssert])#

buf.readInt16LE(offset, [noAssert])#

buf.readInt16BE(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads a signed 16 bit integer from the buffer at the specified offset with specified endian format.

从这个buffer对象里,根据指定的偏移量,使用特殊的 endian字节序格式读取一个 signed 16 bit integer。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Works as buffer.readUInt16*, except buffer contents are treated as two's complement signed values.

与 buffer.readUInt16* 的工作方式相同,区别在于它将 buffer 的内容当作二进制补码的有符号值来解读。

buf.readInt32LE(offset, [noAssert])#

buf.readInt32BE(offset, [noAssert])#

buf.readInt32LE(offset, [noAssert])#

buf.readInt32BE(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads a signed 32 bit integer from the buffer at the specified offset with specified endian format.

从这个buffer对象里,根据指定的偏移量,使用指定的 endian字节序格式读取一个 signed 32 bit integer。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Works as buffer.readUInt32*, except buffer contents are treated as two's complement signed values.

与 buffer.readUInt32* 的工作方式相同,区别在于它将 buffer 的内容当作二进制补码的有符号值来解读。

buf.readFloatLE(offset, [noAssert])#

buf.readFloatBE(offset, [noAssert])#

buf.readFloatLE(offset, [noAssert])#

buf.readFloatBE(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads a 32 bit float from the buffer at the specified offset with specified endian format.

从这个buffer对象里,根据指定的偏移量,使用指定的 endian字节序格式读取一个 32 bit float。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Example:

示例:

var buf = new Buffer(4);

buf[0] = 0x00;
buf[1] = 0x00;
buf[2] = 0x80;
buf[3] = 0x3f;

console.log(buf.readFloatLE(0));

// 0x01

buf.readDoubleLE(offset, [noAssert])#

buf.readDoubleBE(offset, [noAssert])#

buf.readDoubleLE(offset, [noAssert])#

buf.readDoubleBE(offset, [noAssert])#

  • offset Number
  • noAssert Boolean, Optional, Default: false
  • Return: Number

  • offset Number类型

  • noAssert Boolean类型, 可选参数, 默认: false
  • Return: Number类型

Reads a 64 bit double from the buffer at the specified offset with specified endian format.

从这个buffer对象里,根据指定的偏移量,使用指定的 endian字节序格式读取一个 64 bit double。

Set noAssert to true to skip validation of offset. This means that offset may be beyond the end of the buffer. Defaults to false.

设置参数 noAssert为true表示忽略验证offset偏移量参数。 这意味着 offset可能会超出buffer的末尾。默认是 false

Example:

示例:

var buf = new Buffer(8);

buf[0] = 0x55;
buf[1] = 0x55;
buf[2] = 0x55;
buf[3] = 0x55;
buf[4] = 0x55;
buf[5] = 0x55;
buf[6] = 0xd5;
buf[7] = 0x3f;

console.log(buf.readDoubleLE(0));

// 0.3333333333333333

buf.writeUInt8(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset. Note, value must be a valid unsigned 8 bit integer.

根据指定的offset偏移量将value写入buffer。注意:value 必须是一个合法的unsigned 8 bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略对 valueoffset 的验证。这意味着 value 对于这个函数来说可能过大,或者 offset 可能超出 buffer 的末尾,导致这些值被悄悄丢弃。除非你非常有把握,否则不应该使用这个参数。默认是 false

Example:

示例:

var buf = new Buffer(4);
buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);

console.log(buf);

// <Buffer 03 04 23 42>

buf.writeUInt16LE(value, offset, [noAssert])#

buf.writeUInt16BE(value, offset, [noAssert])#

buf.writeUInt16LE(value, offset, [noAssert])#

buf.writeUInt16BE(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a valid unsigned 16 bit integer.

根据指定的offset偏移量和指定的 endian字节序格式将value写入buffer。注意:value 必须是一个合法的unsigned 16 bit integer.

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略对 valueoffset 的验证。这意味着 value 对于这个函数来说可能过大,或者 offset 可能超出 buffer 的末尾,导致这些值被悄悄丢弃。除非你非常有把握,否则不应该使用这个参数。默认是 false

Example:

示例:

var buf = new Buffer(4);
buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);

console.log(buf);

buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);

console.log(buf);

// <Buffer de ad be ef>
// <Buffer ad de ef be>

buf.writeUInt32LE(value, offset, [noAssert])#

buf.writeUInt32BE(value, offset, [noAssert])#

buf.writeUInt32LE(value, offset, [noAssert])#

buf.writeUInt32BE(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a valid unsigned 32 bit integer.

根据指定的offset偏移量和指定的 endian字节序格式将value写入buffer。注意:value 必须是一个合法的unsigned 32 bit integer。

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略对 valueoffset 的验证。这意味着 value 对于这个函数来说可能过大,或者 offset 可能超出 buffer 的末尾,导致这些值被悄悄丢弃。除非你非常有把握,否则不应该使用这个参数。默认是 false

Example:

示例:

var buf = new Buffer(4);
buf.writeUInt32BE(0xfeedface, 0);

console.log(buf);

buf.writeUInt32LE(0xfeedface, 0);

console.log(buf);

// <Buffer fe ed fa ce>
// <Buffer ce fa ed fe>

buf.writeInt8(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset. Note, value must be a valid signed 8 bit integer.

根据指定的offset偏移量将value写入buffer。注意:value 必须是一个合法的 signed 8 bit integer。

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略对 valueoffset 的验证。这意味着 value 对于这个函数来说可能过大,或者 offset 可能超出 buffer 的末尾,导致这些值被悄悄丢弃。除非你非常有把握,否则不应该使用这个参数。默认是 false

Works as buffer.writeUInt8, except value is written out as a two's complement signed integer into buffer.

buffer.writeUInt8 一样工作,除非是把有2的补码的 signed integer 有符号整形写入buffer

buf.writeInt16LE(value, offset, [noAssert])#

buf.writeInt16BE(value, offset, [noAssert])#

buf.writeInt16LE(value, offset, [noAssert])#

buf.writeInt16BE(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a valid signed 16 bit integer.

根据指定的offset偏移量和指定的 endian字节序格式将value写入buffer。注意:value 必须是一个合法的 signed 16 bit integer。

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略对 valueoffset 的验证。这意味着 value 对于这个函数来说可能过大,或者 offset 可能超出 buffer 的末尾,导致这些值被悄悄丢弃。除非你非常有把握,否则不应该使用这个参数。默认是 false

Works as buffer.writeUInt16*, except value is written out as a two's complement signed integer into buffer.

buffer.writeUInt16* 一样工作,除非是把有2的补码的 signed integer 有符号整形写入buffer

buf.writeInt32LE(value, offset, [noAssert])#

buf.writeInt32BE(value, offset, [noAssert])#

buf.writeInt32LE(value, offset, [noAssert])#

buf.writeInt32BE(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a valid signed 32 bit integer.

根据指定的offset偏移量和指定的 endian字节序格式将value写入buffer。注意:value 必须是一个合法的 signed 32 bit integer。

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略对 valueoffset 的验证。这意味着 value 对于这个函数来说可能过大,或者 offset 可能超出 buffer 的末尾,导致这些值被悄悄丢弃。除非你非常有把握,否则不应该使用这个参数。默认是 false

Works as buffer.writeUInt32*, except value is written out as a two's complement signed integer into buffer.

buffer.writeUInt32* 一样工作,除非是把有2的补码的 signed integer 有符号整形写入buffer

buf.writeFloatLE(value, offset, [noAssert])#

buf.writeFloatBE(value, offset, [noAssert])#

buf.writeFloatLE(value, offset, [noAssert])#

buf.writeFloatBE(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset with specified endian format. Note, behavior is unspecified if value is not a 32 bit float.

根据指定的offset偏移量和指定的 endian字节序格式将value写入buffer。注意:当value 不是一个 32 bit float 类型的值时,结果将是不确定的。

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略对 valueoffset 的验证。这意味着 value 对于这个函数来说可能过大,或者 offset 可能超出 buffer 的末尾,导致这些值被悄悄丢弃。除非你非常有把握,否则不应该使用这个参数。默认是 false

Example:

示例:

var buf = new Buffer(4);
buf.writeFloatBE(0xcafebabe, 0);

console.log(buf);

buf.writeFloatLE(0xcafebabe, 0);

console.log(buf);

// <Buffer 4f 4a fe bb>
// <Buffer bb fe 4a 4f>

buf.writeDoubleLE(value, offset, [noAssert])#

buf.writeDoubleBE(value, offset, [noAssert])#

buf.writeDoubleLE(value, offset, [noAssert])#

buf.writeDoubleBE(value, offset, [noAssert])#

  • value Number
  • offset Number
  • noAssert Boolean, Optional, Default: false

  • value Number类型

  • offset Number类型
  • noAssert Boolean类型, 可选参数, 默认: false

Writes value to the buffer at the specified offset with specified endian format. Note, value must be a valid 64 bit double.

根据指定的offset偏移量和指定的 endian字节序格式将value写入buffer。注意:value 必须是一个有效的 64 bit double 类型的值。

Set noAssert to true to skip validation of value and offset. This means that value may be too large for the specific function and offset may be beyond the end of the buffer leading to the values being silently dropped. This should not be used unless you are certain of correctness. Defaults to false.

设置参数 noAssert 为 true 表示忽略对 valueoffset 的验证。这意味着 value 对于这个函数来说可能过大,或者 offset 可能超出 buffer 的末尾,导致这些值被悄悄丢弃。除非你非常有把握,否则不应该使用这个参数。默认是 false

Example:

示例:

var buf = new Buffer(8);
buf.writeDoubleBE(0xdeadbeefcafebabe, 0);

console.log(buf);

buf.writeDoubleLE(0xdeadbeefcafebabe, 0);

console.log(buf);

// <Buffer 43 eb d5 b7 dd f9 5f d7>
// <Buffer d7 5f f9 dd b7 d5 eb 43>

buf.fill(value, [offset], [end])#

  • value
  • offset Number, Optional
  • end Number, Optional

  • value

  • offset Number类型, 可选参数
  • end Number类型, 可选参数

Fills the buffer with the specified value. If the offset (defaults to 0) and end (defaults to buffer.length) are not given it will fill the entire buffer.

使用指定的value来填充这个buffer。如果 offset (默认是 0) 并且 end (默认是 buffer.length) 没有明确给出,就会填充整个buffer。 (译者:buf.fill调用的是C语言的memset函数非常高效)

var b = new Buffer(50);
b.fill("h");
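The offset and end parameters can be sketched like this (a minimal illustration, not from the original documentation):

offsetend 参数可以这样示意(最小示例,并非原文档内容):

```javascript
var b = new Buffer(8);

b.fill(0);           // 把整个 buffer 填充为 0
b.fill(0x61, 2, 6);  // 只把索引 [2, 6) 填充为 'a' (0x61)

console.log(b);      // <Buffer 00 00 61 61 61 61 00 00>
```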

buffer.INSPECT_MAX_BYTES#

  • Number, Default: 50

  • Number类型, 默认: 50

How many bytes will be returned when buffer.inspect() is called. This can be overridden by user modules.

设置当调用buffer.inspect()方法后,多少bytes将会返回。这个值可以被用户模块重写。 (译者:这个值主要用在当我们打印console.log(buf)时,设置返回多少长度内容)

Note that this is a property on the buffer module returned by require('buffer'), not on the Buffer global, or a buffer instance.

注意这个属性由 require('buffer') 返回的 buffer 模块提供,而不在全局的 Buffer 上,也不在 buffer 的实例上。

类: SlowBuffer#

Returns an un-pooled Buffer.

返回一个不被池管理的 Buffer

In order to avoid the garbage collection overhead of creating many individually allocated Buffers, by default allocations under 4KB are sliced from a single larger allocated object. This approach improves both performance and memory usage since v8 does not need to track and cleanup as many Persistent objects.

为了避免创建大量独立分配的 Buffer 带来的垃圾回收开销,默认情况下小于 4KB 的空间都是切割自一个较大的独立对象。这种策略既提高了性能也改善了内存使用,因为 V8 不需要跟踪和清理很多 Persistent 对象。

In the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time it may be appropriate to create an un-pooled Buffer instance using SlowBuffer and copy out the relevant bits.

当开发者需要将池中一小块数据保留不确定的一段时间,较为妥当的办法是用 SlowBuffer 创建一个不被池管理的 Buffer 实例并将相应数据拷贝出来。

socket.on('readable', function() {
  var data = socket.read();
  // 为需要保留的数据分配内存
  var sb = new SlowBuffer(10);
  // 将数据拷贝到新的空间中
  data.copy(sb, 0, 0, 10);
  store.push(sb);
});

Though this should be used sparingly and only be a last resort after a developer has actively observed undue memory retention in their applications.

不过这种做法应该谨慎使用,仅作为开发者在应用中确实观察到过度内存保留之后的最后手段。

流 (Stream)#

稳定度: 2 - 不稳定

A stream is an abstract interface implemented by various objects in Node. For example a request to an HTTP server is a stream, as is stdout. Streams are readable, writable, or both. All streams are instances of EventEmitter

流是一个抽象接口,被 Node 中的很多对象所实现。比如对一个 HTTP 服务器的请求是一个流,stdout 也是一个流。流是可读、可写或兼具两者的。所有流都是 EventEmitter 的实例。

You can load the Stream base classes by doing require('stream'). There are base classes provided for Readable streams, Writable streams, Duplex streams, and Transform streams.

您可以通过 require('stream') 加载 Stream 基类,其中包括了 Readable 流、Writable 流、Duplex 流和 Transform 流的基类。

This document is split up into 3 sections. The first explains the parts of the API that you need to be aware of to use streams in your programs. If you never implement a streaming API yourself, you can stop there.

本文档分为三个章节。第一章节解释了您在您的程序中使用流时需要了解的那部分 API,如果您不打算自己实现一个流式 API,您可以只阅读这一章节。

The second section explains the parts of the API that you need to use if you implement your own custom streams yourself. The API is designed to make this easy for you to do.

第二章节解释了当您自己实现一个流时需要用到的那部分 API,这些 API 是为了方便您这么做而设计的。

The third section goes into more depth about how streams work, including some of the internal mechanisms and functions that you should probably not modify unless you definitely know what you are doing.

第三章节深入讲解了流的工作方式,包括一些内部机制和函数,除非您明确知道您在做什么,否则尽量不要改动它们。

面向流消费者的 API#

Streams can be either Readable, Writable, or both (Duplex).

流可以是可读(Readable)或可写(Writable),或者兼具两者(Duplex,双工)的。

All streams are EventEmitters, but they also have other custom methods and properties depending on whether they are Readable, Writable, or Duplex.

所有流都是 EventEmitter,但它们也具有其它自定义方法和属性,取决于它们是 Readable、Writable 或 Duplex。

If a stream is both Readable and Writable, then it implements all of the methods and events below. So, a Duplex or Transform stream is fully described by this API, though their implementation may be somewhat different.

如果一个流既可读(Readable)也可写(Writable),则它实现了下文所述的所有方法和事件。因此,这些 API 同时也涵盖了 DuplexTransform 流,即便它们的实现可能有点不同。

It is not necessary to implement Stream interfaces in order to consume streams in your programs. If you are implementing streaming interfaces in your own program, please also refer to API for Stream Implementors below.

为了消费流而在您的程序中自己实现 Stream 接口是没有必要的。如果您确实正在您自己的程序中实现流式接口,请同时参考下文面向流实现者的 API

Almost all Node programs, no matter how simple, use Streams in some way. Here is an example of using Streams in a Node program:

几乎所有 Node 程序,无论多简单,都以某种方式用到了流。下面是一个在 Node 程序中使用流的例子:

var http = require('http');

var server = http.createServer(function (req, res) {
  // req 为 http.IncomingMessage,是一个可读流(Readable Stream)
  // res 为 http.ServerResponse,是一个可写流(Writable Stream)

  var body = '';
  // 我们打算以 UTF-8 字符串的形式获取数据
  // 如果您不设置编码,您将得到一个 Buffer 对象
  req.setEncoding('utf8');

  // 一旦监听器被添加,可读流会触发 'data' 事件
  req.on('data', function (chunk) {
    body += chunk;
  })

  // 'end' 事件表明您已经得到了完整的 body
  req.on('end', function () {
    try {
      var data = JSON.parse(body);
    } catch (er) {
      // uh oh!  bad json!
      res.statusCode = 400;
      return res.end('错误: ' + er.message);
    }

    // 向用户回写一些有趣的信息
    res.write(typeof data);
    res.end();
  })
})

server.listen(1337);

// $ curl localhost:1337 -d '{}'
// object
// $ curl localhost:1337 -d '"foo"'
// string
// $ curl localhost:1337 -d 'not json'
// 错误: Unexpected token o

类: stream.Readable#

The Readable stream interface is the abstraction for a source of data that you are reading from. In other words, data comes out of a Readable stream.

Readable(可读)流接口是对您正在读取的数据的来源的抽象。换言之,数据出自一个 Readable 流。

A Readable stream will not start emitting data until you indicate that you are ready to receive it.

在您表明您已准备好接收之前,Readable 流并不会开始发出数据。

Readable streams have two "modes": a flowing mode and a paused mode. When in flowing mode, data is read from the underlying system and provided to your program as fast as possible. In paused mode, you must explicitly call stream.read() to get chunks of data out. Streams start out in paused mode.

Readable 流有两种“模式”:流动模式暂停模式。当处于流动模式时,数据由底层系统读出,并尽可能快地提供给您的程序;当处于暂停模式时,您必须明确地调用 stream.read() 来取出若干数据块。流一开始就处于暂停模式。

Note: If no data event handlers are attached, and there are no pipe() destinations, and the stream is switched into flowing mode, then data will be lost.

注意:如果没有绑定 data 事件处理器,也没有 pipe() 目标,而流又被切换到流动模式,那么数据将会丢失。

You can switch to flowing mode by doing any of the following:

您可以通过下面几种做法切换到流动模式:

  • 添加一个 'data' 事件处理器来监听数据。

  • 调用 resume() 方法来明确开启数据流。

  • 调用 pipe() 方法将数据发送到一个可写流(Writable)。

You can switch back to paused mode by doing either of the following:

您可以通过下面其中一种做法切换回暂停模式:

  • If there are no pipe destinations, by calling the pause() method.
  • If there are pipe destinations, by removing any 'data' event handlers, and removing all pipe destinations by calling the unpipe() method.

  • 如果没有导流目标,调用 pause() 方法。

  • 如果有导流目标,移除所有 'data' 事件处理器,并调用 unpipe() 方法移除所有导流目标。

Note that, for backwards compatibility reasons, removing 'data' event handlers will not automatically pause the stream. Also, if there are piped destinations, then calling pause() will not guarantee that the stream will remain paused once those destinations drain and ask for more data.

请注意,出于向后兼容的考虑,移除 'data' 事件监听器并不会自动暂停流。同样,当存在导流目标时,调用 pause() 并不能保证流在那些目标排空并请求更多数据后仍维持暂停状态。

Examples of readable streams include:

一些可读流的例子:

  • 客户端的 HTTP 响应

  • 服务端的 HTTP 请求

  • fs 读取流

  • zlib 流

  • crypto 流

  • TCP 套接字

  • 子进程的 stdout 和 stderr

  • process.stdin

事件: 'readable'#

When a chunk of data can be read from the stream, it will emit a 'readable' event.

当一个数据块可以从流中被读出时,它会触发一个 'readable' 事件。

In some cases, listening for a 'readable' event will cause some data to be read into the internal buffer from the underlying system, if it hadn't already.

在某些情况下,如果数据尚未被读入,监听 'readable' 事件会使一些数据从底层系统被读入内部缓冲区。

var readable = getReadableStreamSomehow();
readable.on('readable', function() {
  // 现在有数据可以读了
})

Once the internal buffer is drained, a readable event will fire again when more data is available.

当内部缓冲区被排空后,一旦有更多数据可用,readable 事件会被再次触发。

事件: 'data'#

  • chunk Buffer | String The chunk of data.

  • chunk Buffer | String 数据块。

Attaching a data event listener to a stream that has not been explicitly paused will switch the stream into flowing mode. Data will then be passed as soon as it is available.

绑定一个 data 事件监听器到一个未被明确暂停的流,会将流切换到流动模式,数据一旦可用就会被传递出来。

If you just want to get all the data out of the stream as fast as possible, this is the best way to do so.

如果您只想尽快地从流中取出所有数据,这是最理想的方式。

var readable = getReadableStreamSomehow();
readable.on('data', function(chunk) {
  console.log('得到了 %d 字节的数据', chunk.length);
})

事件: 'end'#

This event fires when no more data will be provided.

该事件会在没有更多数据能够提供时被触发。

Note that the end event will not fire unless the data is completely consumed. This can be done by switching into flowing mode, or by calling read() repeatedly until you get to the end.

请注意,end 事件在数据被完全消费之前不会被触发。这可通过切换到流动模式,或者在到达末端前不断调用 read() 来实现。

var readable = getReadableStreamSomehow();
readable.on('data', function(chunk) {
  console.log('得到了 %d 字节的数据', chunk.length);
})
readable.on('end', function() {
  console.log('读取完毕。');
});

事件: 'close'#

Emitted when the underlying resource (for example, the backing file descriptor) has been closed. Not all streams will emit this.

当底层资源(比如底层的文件描述符)被关闭时触发。并不是所有流都会触发这个事件。

事件: 'error'#

Emitted if there was an error receiving data.

当接收数据发生错误时触发。

readable.read([size])#

  • size Number Optional argument to specify how much data to read.
  • Return String | Buffer | null

  • size Number 可选参数,指定要读取多少数据。

  • 返回 String | Buffer | null

The read() method pulls some data out of the internal buffer and returns it. If there is no data available, then it will return null.

read() 方法从内部缓冲区中拉取并返回若干数据。如果没有可用数据,它会返回 null

If you pass in a size argument, then it will return that many bytes. If size bytes are not available, then it will return null.

若您传入了 size 参数,它会返回这么多字节的数据;若没有 size 字节可用,它则返回 null

If you do not specify a size argument, then it will return all the data in the internal buffer.

若您没有指定 size 参数,那么它会返回内部缓冲区中的所有数据。

This method should only be called in paused mode. In flowing mode, this method is called automatically until the internal buffer is drained.

该方法仅应在暂停模式时被调用。在流动模式中,该方法会被自动调用直到内部缓冲区排空。

var readable = getReadableStreamSomehow();
readable.on('readable', function() {
  var chunk;
  while (null !== (chunk = readable.read())) {
    console.log('得到了 %d 字节的数据', chunk.length);
  }
});

If this method returns a data chunk, then it will also trigger the emission of a 'data' event.

当该方法返回了一个数据块,它同时也会触发 'data' 事件

readable.setEncoding(encoding)#

  • encoding String The encoding to use.
  • Return: this

  • encoding String 要使用的编码。

  • 返回: this

Call this function to cause the stream to return strings of the specified encoding instead of Buffer objects. For example, if you do readable.setEncoding('utf8'), then the output data will be interpreted as UTF-8 data, and returned as strings. If you do readable.setEncoding('hex'), then the data will be encoded in hexadecimal string format.

调用此函数会使得流返回指定编码的字符串而不是 Buffer 对象。比如,当您调用 readable.setEncoding('utf8') 时,输出数据会被解析为 UTF-8 数据,并以字符串返回;如果您调用 readable.setEncoding('hex'),数据则会被编码成十六进制的字符串格式。

This properly handles multi-byte characters that would otherwise be potentially mangled if you simply pulled the Buffers directly and called buf.toString(encoding) on them. If you want to read the data as strings, always use this method.

该方法能正确处理多字节字符。假如您不这么做,仅仅直接取出 Buffer 并对它们调用 buf.toString(encoding),很可能会导致字节错位。因此如果您打算以字符串读取数据,请总是使用这个方法。

var readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', function(chunk) {
  assert.equal(typeof chunk, 'string');
  console.log('得到了 %d 个字符的字符串数据', chunk.length);
})

readable.resume()#

  • Return: this

  • 返回: this

This method will cause the readable stream to resume emitting data events.

该方法让可读流继续触发 data 事件。

This method will switch the stream into flowing mode. If you do not want to consume the data from a stream, but you do want to get to its end event, you can call readable.resume() to open the flow of data.

该方法会将流切换到流动模式。如果您不想从流中消费数据,但想要得到它的 end 事件,可以调用 readable.resume() 来开启数据的流动。

var readable = getReadableStreamSomehow();
readable.resume();
readable.on('end', function(chunk) {
  console.log('到达末端,但并未读取任何东西');
})

readable.pause()#

  • Return: this

  • 返回: this

This method will cause a stream in flowing mode to stop emitting data events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

该方法会使一个处于流动模式的流停止触发 data 事件,切换到非流动模式,并让后续可用数据留在内部缓冲区中。

var readable = getReadableStreamSomehow();
readable.on('data', function(chunk) {
  console.log('取得 %d 字节数据', chunk.length);
  readable.pause();
  console.log('接下来 1 秒内不会有数据');
  setTimeout(function() {
    console.log('现在数据会再次开始流动');
    readable.resume();
  }, 1000);
})

readable.pipe(destination, [options])#

  • destination Writable Stream The destination for writing data
  • options Object Pipe options

    • end Boolean End the writer when the reader ends. Default = true
  • destination Writable Stream 写入数据的目标

  • options Object 导流选项
    • end Boolean 在读取者结束时结束写入者。缺省为 true

This method pulls all the data out of a readable stream, and writes it to the supplied destination, automatically managing the flow so that the destination is not overwhelmed by a fast readable stream.

该方法从可读流中拉取所有数据,并写入到所提供的目标。该方法能自动控制流量以避免目标被快速读取的可读流所淹没。

Multiple destinations can be piped to safely.

可以安全地导流到多个目标。

var readable = getReadableStreamSomehow();
var writable = fs.createWriteStream('file.txt');
// 所有来自 readable 的数据会被写入到 'file.txt'
readable.pipe(writable);

This function returns the destination stream, so you can set up pipe chains like so:

该函数返回目标流,因此您可以建立导流链:

var r = fs.createReadStream('file.txt');
var z = zlib.createGzip();
var w = fs.createWriteStream('file.txt.gz');
r.pipe(z).pipe(w);

For example, emulating the Unix cat command:

例如,模拟 Unix 的 cat 命令:

process.stdin.pipe(process.stdout);

By default end() is called on the destination when the source stream emits end, so that destination is no longer writable. Pass { end: false } as options to keep the destination stream open.

缺省情况下当来源流触发 end 时目标的 end() 会被调用,所以此时 destination 不再可写。传入 { end: false } 作为 options 可以让目标流保持开启状态。

This keeps writer open so that "Goodbye" can be written at the end.

这将让 writer 保持开启,因此最后可以写入 "Goodbye"。

reader.pipe(writer, { end: false });
reader.on('end', function() {
  writer.end('Goodbye\n');
});

Note that process.stderr and process.stdout are never closed until the process exits, regardless of the specified options.

请注意 process.stderrprocess.stdout 在进程结束前都不会被关闭,无论是否指定选项。

readable.unpipe([destination])#

  • destination Writable Stream Optional specific stream to unpipe

  • destination Writable Stream 可选,指定解除导流的流

This method will remove the hooks set up for a previous pipe() call.

该方法会移除之前调用 pipe() 所设定的钩子。

If the destination is not specified, then all pipes are removed.

如果不指定目标,所有导流都会被移除。

If the destination is specified, but no pipe is set up for it, then this is a no-op.

如果指定了目标,但并没有与之建立导流,则什么事都不会发生。

var readable = getReadableStreamSomehow();
var writable = fs.createWriteStream('file.txt');
// 来自 readable 的所有数据都会被写入 'file.txt',
// 但仅发生在第 1 秒
readable.pipe(writable);
setTimeout(function() {
  console.log('停止写入到 file.txt');
  readable.unpipe(writable);
  console.log('自行关闭文件流');
  writable.end();
}, 1000);

readable.unshift(chunk)#

  • chunk Buffer | String Chunk of data to unshift onto the read queue

  • chunk Buffer | String 要插回读取队列开头的数据块

This is useful in certain cases where a stream is being consumed by a parser, which needs to "un-consume" some data that it has optimistically pulled out of the source, so that the stream can be passed on to some other party.

该方法在一些场景中很有用,比如:一个流正在被一个解析器消费,解析器需要将一些它乐观地从来源拉取出来的数据“逆消费”回去,以便流能被传递给其它消费者。

If you find that you must often call stream.unshift(chunk) in your programs, consider implementing a Transform stream instead. (See API for Stream Implementors, below.)

如果您发现您需要在您的程序中频繁调用 stream.unshift(chunk),请考虑实现一个 Transform 流。(详见下文面向流实现者的 API。)

// 取出以 \n\n 分割的头部并将多余部分 unshift() 回去
// callback 以 (error, header, stream) 形式调用
var StringDecoder = require('string_decoder').StringDecoder;
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  var decoder = new StringDecoder('utf8');
  var header = '';
  function onReadable() {
    var chunk;
    while (null !== (chunk = stream.read())) {
      var str = decoder.write(chunk);
      if (str.match(/\n\n/)) {
        // 找到头部边界
        var split = str.split(/\n\n/);
        header += split.shift();
        var remaining = split.join('\n\n');
        var buf = new Buffer(remaining, 'utf8');
        if (buf.length)
          stream.unshift(buf);
        stream.removeListener('error', callback);
        stream.removeListener('readable', onReadable);
        // 现在可以从流中读取消息的主体了
        callback(null, header, stream);
      } else {
        // 仍在读取头部
        header += str;
      }
    }
  }
}

readable.wrap(stream)#

  • stream Stream An "old style" readable stream

  • stream Stream 一个“旧式”可读流

Versions of Node prior to v0.10 had streams that did not implement the entire Streams API as it is today. (See "Compatibility" below for more information.)

Node v0.10 版本之前的流并未实现现今所有流 API。(更多信息详见下文“兼容性”章节。)

If you are using an older Node library that emits 'data' events and has a pause() method that is advisory only, then you can use the wrap() method to create a Readable stream that uses the old stream as its data source.

如果您正在使用一个较老的 Node 库,它会触发 'data' 事件,并且有一个仅起提示作用的 pause() 方法,那么您可以使用 wrap() 方法来创建一个以这个旧式流作为数据源的 Readable 流。

You will very rarely ever need to call this function, but it exists as a convenience for interacting with old Node programs and libraries.

您将很少需要用到这个函数,但它为与旧的 Node 程序和库交互提供了便利。

For example:

例如:

var OldReader = require('./old-api-module.js').OldReader;
var oreader = new OldReader;
var Readable = require('stream').Readable;
var myReader = new Readable().wrap(oreader);

myReader.on('readable', function() {
  myReader.read(); // 等等
});

类: stream.Writable#

The Writable stream interface is an abstraction for a destination that you are writing data to.

Writable(可写)流接口是对您正在向其写入数据的目标的一种抽象。

Examples of writable streams include:

一些可写流的例子:

  • 客户端的 HTTP 请求

  • 服务端的 HTTP 响应

  • fs 写入流

  • zlib 流

  • crypto 流

  • TCP 套接字

  • 子进程的 stdin

  • process.stdout 和 process.stderr

writable.write(chunk, [encoding], [callback])#

  • chunk String | Buffer The data to write
  • encoding String The encoding, if chunk is a String
  • callback Function Callback for when this chunk of data is flushed
  • Returns: Boolean True if the data was handled completely.
  • chunk {String | Buffer} 要写入的数据
  • encoding {String} 编码,假如 chunk 是一个字符串
  • callback {Function} 数据块写入后的回调
  • 返回: {Boolean} 如果数据已被全部处理则 true

This method writes some data to the underlying system, and calls the supplied callback once the data has been fully handled.

该方法向底层系统写入数据,并在数据被处理完毕后调用所给的回调。

The return value indicates if you should continue writing right now. If the data had to be buffered internally, then it will return false. Otherwise, it will return true.

返回值表明您是否应该立即继续写入。如果数据不得不被缓冲在内部,它会返回 false;否则返回 true

This return value is strictly advisory. You MAY continue to write, even if it returns false. However, writes will be buffered in memory, so it is best not to do this excessively. Instead, wait for the drain event before writing more data.

返回值所表示的状态仅供参考,您【可以】在即便返回 false 的时候继续写入。但是,写入的数据会被滞留在内存中,所以最好不要过分地这么做。最好的做法是等待 drain 事件发生后再继续写入更多数据。

事件: 'drain'#

If a writable.write(chunk) call returns false, then the drain event will indicate when it is appropriate to begin writing more data to the stream.

如果一个 writable.write(chunk) 调用返回 false,那么 drain 事件则表明可以继续向流写入更多数据。

// 向所给可写流写入 1000000 次数据。
// 注意背压(back-pressure)。
function writeOneMillionTimes(writer, data, encoding, callback) {
  var i = 1000000;
  write();
  function write() {
    var ok = true;
    do {
      i -= 1;
      if (i === 0) {
        // 最后一次!
        writer.write(data, encoding, callback);
      } else {
        // 检查我们应该继续还是等待
        // 不要传递回调,因为我们还没完成。
        ok = writer.write(data, encoding);
      }
    } while (i > 0 && ok);
    if (i > 0) {
      // 不得不提前停止!
      // 一旦它排空,继续写入数据
      writer.once('drain', write);
    }
  }
}

writable.cork()#

Forces buffering of all writes.

强行滞留所有写入。

Buffered data will be flushed either at .uncork() or at .end() call.

滞留的数据会在 .uncork().end() 调用时被写入。

writable.uncork()#

Flush all data, buffered since .cork() call.

写入所有 .cork() 调用之后滞留的数据。

writable.end([chunk], [encoding], [callback])#

  • chunk String | Buffer Optional data to write
  • encoding String The encoding, if chunk is a String
  • callback Function Optional callback for when the stream is finished

  • chunk String | Buffer 可选,要写入的数据

  • encoding String 编码,假如 chunk 是一个字符串
  • callback Function 可选,流结束后的回调

Call this method when no more data will be written to the stream. If supplied, the callback is attached as a listener on the finish event.

当没有更多数据会被写入到流时调用此方法。如果给出,回调会被用作 finish 事件的监听器。

Calling write() after calling end() will raise an error.

在调用 end() 后调用 write() 会产生错误。

// 写入 'hello, ' 然后以 'world!' 结束
http.createServer(function (req, res) {
  res.write('hello, ');
  res.end('world!');
  // 现在不允许继续写入了
});

事件: 'finish'#

When the end() method has been called, and all data has been flushed to the underlying system, this event is emitted.

end() 方法被调用,并且所有数据已被写入到底层系统,此事件会被触发。

var writer = getWritableStreamSomehow();
for (var i = 0; i < 100; i ++) {
  writer.write('hello, #' + i + '!\n');
}
writer.end('this is the end\n');
writer.on('finish', function() {
  console.error('已完成所有写入。');
});

事件: 'pipe'#

  • src Readable Stream source stream that is piping to this writable

  • src Readable Stream 导流到本可写流的来源流

This is emitted whenever the pipe() method is called on a readable stream, adding this writable to its set of destinations.

该事件发生于可读流的 pipe() 方法被调用并添加本可写流作为它的目标时。

var writer = getWritableStreamSomehow();
var reader = getReadableStreamSomehow();
writer.on('pipe', function(src) {
  console.error('某些东西正被导流到 writer');
  assert.equal(src, reader);
});
reader.pipe(writer);

事件: 'unpipe'#

This is emitted whenever the unpipe() method is called on a readable stream, removing this writable from its set of destinations.

该事件发生于可读流的 unpipe() 方法被调用并将本可写流从它的目标移除时。

var writer = getWritableStreamSomehow();
var reader = getReadableStreamSomehow();
writer.on('unpipe', function(src) {
  console.error('某些东西停止导流到 writer 了');
  assert.equal(src, reader);
});
reader.pipe(writer);
reader.unpipe(writer);

类: stream.Duplex#

Duplex streams are streams that implement both the Readable and Writable interfaces. See above for usage.

双工(Duplex)流同时实现了 ReadableWritable 的接口。用法详见上文。

Examples of Duplex streams include:

一些双工流的例子:

  • TCP 套接字

  • zlib 流

  • crypto 流

类: stream.Transform#

Transform streams are Duplex streams where the output is in some way computed from the input. They implement both the Readable and Writable interfaces. See above for usage.

转换(Transform)流是一种输出由输入计算所得的双工流。它们同时实现了 ReadableWritable 的接口。用法详见上文。

Examples of Transform streams include:

一些转换流的例子:

  • zlib 流

  • crypto 流

面向流实现者的 API#

To implement any sort of stream, the pattern is the same:

无论实现任何形式的流,模式都是一样的:

  1. Extend the appropriate parent class in your own subclass. (The util.inherits method is particularly helpful for this.)
  2. Call the appropriate parent class constructor in your constructor, to be sure that the internal mechanisms are set up properly.
  3. Implement one or more specific methods, as detailed below.

  1. 在您的子类中扩充适合的父类。(util.inherits 方法对此很有帮助。)

  2. 在您的构造函数中调用父类的构造函数,以确保内部的机制被正确初始化。

  3. 实现一个或多个特定的方法,参见下面的细节。

The class to extend and the method(s) to implement depend on the sort of stream class you are writing:

所扩充的类和要实现的方法取决于您要编写的流类的形式:

Use-case | Class | Method(s) to implement
Reading only | Readable | _read
Writing only | Writable | _write
Reading and writing | Duplex | _read, _write
Operate on written data, then read the result | Transform | _transform, _flush

使用情景 | 类 | 要实现的方法
只读 | Readable | _read
只写 | Writable | _write
读写 | Duplex | _read, _write
操作被写入的数据,然后读出结果 | Transform | _transform, _flush

In your implementation code, it is very important to never call the methods described in API for Stream Consumers above. Otherwise, you can potentially cause adverse side effects in programs that consume your streaming interfaces.

在您的实现代码中,十分重要的一点是绝对不要调用上文面向流消费者的 API 中所描述的方法,否则可能在消费您的流接口的程序中产生潜在的副作用。

类: stream.Readable#

stream.Readable is an abstract class designed to be extended with an underlying implementation of the _read(size) method.

stream.Readable 是一个抽象类,它被设计为通过在子类中实现底层的 _read(size) 方法来进行扩充。

Please see above under API for Stream Consumers for how to consume streams in your programs. What follows is an explanation of how to implement Readable streams in your programs.

请阅读前文面向流消费者的 API 章节了解如何在您的程序中消费流。下文将解释如何在您的程序中自己实现 Readable 流。

例子: 一个计数流#

This is a basic example of a Readable stream. It emits the numerals from 1 to 1,000,000 in ascending order, and then ends.

这是一个 Readable 流的基本例子。它按升序发出 1 至 1,000,000 的数字,然后结束。

var Readable = require('stream').Readable;
var util = require('util');
util.inherits(Counter, Readable);

function Counter(opt) {
  Readable.call(this, opt);
  this._max = 1000000;
  this._index = 1;
}

Counter.prototype._read = function() {
  var i = this._index++;
  if (i > this._max)
    this.push(null);
  else {
    var str = '' + i;
    var buf = new Buffer(str, 'ascii');
    this.push(buf);
  }
};

例子: SimpleProtocol v1 (Sub-optimal)#

This is similar to the parseHeader function described above, but implemented as a custom stream. Also, note that this implementation does not convert the incoming data to a string.

这个有点类似上文提到的 parseHeader 函数,但它被实现成一个自定义流。同样地,请注意这个实现并未将传入数据转换成字符串。

However, this would be better implemented as a Transform stream. See below for a better implementation.

实际上,更好的办法是将它实现成一个 Transform 流。更好的实现详见下文。

// 简易数据协议的解析器。
// “header”是一个 JSON 对象,后面紧跟 2 个 \n 字符,以及
// 消息主体。
//
// 注意: 使用 Transform 流能更简单地实现这个功能!
// 直接使用 Readable 并不是最佳方式,详见 Transform
// 章节下的备选例子。

var Readable = require('stream').Readable;
var util = require('util');

util.inherits(SimpleProtocol, Readable);

function SimpleProtocol(source, options) {
  if (!(this instanceof SimpleProtocol))
    return new SimpleProtocol(source, options);

  Readable.call(this, options);
  this._inBody = false;
  this._sawFirstCr = false;

  // source 是一个可读流,比如套接字或文件
  this._source = source;

  var self = this;
  source.on('end', function() {
    self.push(null);
  });

  // 当 source 可读时做点什么
  // read(0) 不会消费任何字节
  source.on('readable', function() {
    self.read(0);
  });

  this._rawHeader = [];
  this.header = null;
}

SimpleProtocol.prototype._read = function(n) {
  if (!this._inBody) {
    var chunk = this._source.read();

    // 如果 source 暂时没有数据,那么我们也没有数据
    if (chunk === null)
      return this.push('');

    // 检查数据块中是否含有 \n\n
    var split = -1;
    for (var i = 0; i < chunk.length; i++) {
      if (chunk[i] === 10) { // '\n'
        if (this._sawFirstCr) {
          split = i;
          break;
        } else {
          this._sawFirstCr = true;
        }
      } else {
        this._sawFirstCr = false;
      }
    }

    if (split === -1) {
      // 继续等待 \n\n
      // 暂存数据块,并再次尝试
      this._rawHeader.push(chunk);
      this.push('');
    } else {
      this._inBody = true;
      var h = chunk.slice(0, split);
      this._rawHeader.push(h);
      var header = Buffer.concat(this._rawHeader).toString();
      try {
        this.header = JSON.parse(header);
      } catch (er) {
        this.emit('error', new Error('invalid simple protocol data'));
        return;
      }
      // 现在,我们得到了一些多余的数据,所以需要 unshift
      // 将多余的数据放回读取队列以便我们的消费者能够读取
      var b = chunk.slice(split);
      this.unshift(b);

      // 并让它们知道我们完成了头部解析。
      this.emit('header', this.header);
    }
  } else {
    // 从现在开始,仅需向我们的消费者提供数据。
    // 注意不要 push(null),因为它表明 EOF。
    var chunk = this._source.read();
    if (chunk) this.push(chunk);
  }
};

// 用法:
// var parser = new SimpleProtocol(source);
// 现在 parser 是一个会触发 'header' 事件并提供已解析
// 的头部的可读流。

new stream.Readable([options])#

  • options Object

    • highWaterMark Number The maximum number of bytes to store in the internal buffer before ceasing to read from the underlying resource. Default=16kb, or 16 for objectMode streams
    • encoding String If specified, then buffers will be decoded to strings using the specified encoding. Default=null
    • objectMode Boolean Whether this stream should behave as a stream of objects. Meaning that stream.read(n) returns a single value instead of a Buffer of size n
  • options Object

    • highWaterMark Number 停止从底层资源读取前内部缓冲区最多能存放的字节数。缺省为 16kb,对于 objectMode 流则是 16
    • encoding String 若给出,则 Buffer 会被解码成所给编码的字符串。缺省为 null
    • objectMode Boolean 该流是否应该表现为对象的流。意思是说 stream.read(n) 返回一个单独的对象,而不是大小为 n 的 Buffer

In classes that extend the Readable class, make sure to call the Readable constructor so that the buffering settings can be properly initialized.

请确保在扩充 Readable 类的类中调用 Readable 构造函数以便缓冲设定能被正确初始化。

readable._read(size)#

  • size Number Number of bytes to read asynchronously

  • size Number 异步读取的字节数

Note: Implement this function, but do NOT call it directly.

注意:实现这个函数,但【不要】直接调用它。

This function should NOT be called directly. It should be implemented by child classes, and only called by the internal Readable class methods.

这个函数【不应该】被直接调用。它应该被子类所实现,并仅被 Readable 类内部方法所调用。

All Readable stream implementations must provide a _read method to fetch data from the underlying resource.

所有 Readable 流的实现都必须提供一个 _read 方法来从底层资源抓取数据。

This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.

该方法以下划线开头,是因为它对于定义它的类是内部的,不应该被用户程序直接调用。但是,您应当在您的扩充类中覆盖这个方法。

When data is available, put it into the read queue by calling readable.push(chunk). If push returns false, then you should stop reading. When _read is called again, you should start pushing more data.

当数据可用时,调用 readable.push(chunk) 将它放入读取队列。如果 push 返回 false,那么您应该停止读取。当 _read 被再次调用时,您应该继续推入更多数据。

The size argument is advisory. Implementations where a "read" is a single call that returns data can use this to know how much data to fetch. Implementations where that is not relevant, such as TCP or TLS, may ignore this argument, and simply provide data whenever it becomes available. There is no need, for example to "wait" until size bytes are available before calling stream.push(chunk).

参数 size 仅供参考。对于“读取”是一次返回数据的单一操作的实现,可以用这个参数来知道要抓取多少数据;对于 TCP 或 TLS 之类与此无关的实现,则可以忽略这个参数,在数据可用时直接提供即可。例如,没有必要“等到”有 size 个字节可用时才调用 stream.push(chunk)

readable.push(chunk, [encoding])#

  • chunk Buffer | null | String Chunk of data to push into the read queue
  • encoding String Encoding of String chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'
  • return Boolean Whether or not more pushes should be performed

  • chunk Buffer | null | String 推入读取队列的数据块

  • encoding String 字符串块的编码。必须是有效的 Buffer 编码,比如 utf8ascii
  • 返回 Boolean 是否应该继续推入

Note: This function should be called by Readable implementors, NOT by consumers of Readable streams.

注意:这个函数应该被 Readable 实现者调用,【而不是】Readable 流的消费者。

The _read() function will not be called again until at least one push(chunk) call is made.

函数 _read() 不会被再次调用,直到至少调用了一次 push(chunk)

The Readable class works by putting data into a read queue to be pulled out later by calling the read() method when the 'readable' event fires.

Readable 类的工作方式是,将数据读入一个队列,当 'readable' 事件发生、调用 read() 方法时,数据会被从队列中取出。

The push() method will explicitly insert some data into the read queue. If it is called with null then it will signal the end of the data (EOF).

push() 方法会明确地向读取队列中插入一些数据。如果调用它时传入了 null 参数,那么它会触发数据结束信号(EOF)。

This API is designed to be as flexible as possible. For example, you may be wrapping a lower-level source which has some sort of pause/resume mechanism, and a data callback. In those cases, you could wrap the low-level source object by doing something like this:

这个 API 被设计成尽可能地灵活。比如说,您可以包装一个低级别的具备某种暂停/恢复机制和数据回调的数据源。这种情况下,您可以通过这种方式包装低级别来源对象:

// source 是一个带 readStop() 和 readStart() 方法的类,
// 以及一个当有数据时会被调用的 `ondata` 成员、一个
// 当数据结束时会被调用的 `onend` 成员。

util.inherits(SourceWrapper, Readable);

function SourceWrapper(options) {
  Readable.call(this, options);

  this._source = getLowlevelSourceObject();
  var self = this;

  // 每当有数据时,我们将它推入到内部缓冲区中
  this._source.ondata = function(chunk) {
    // 如果 push() 返回 false,我们就需要暂停读取 source
    if (!self.push(chunk))
      self._source.readStop();
  };

  // 当来源结束时,我们 push 一个 `null` 块以表示 EOF
  this._source.onend = function() {
    self.push(null);
  };
}

// _read 会在流想要拉取更多数据时被调用
// 本例中忽略 size 参数
SourceWrapper.prototype._read = function(size) {
  this._source.readStart();
};

类: stream.Writable#

stream.Writable is an abstract class designed to be extended with an underlying implementation of the _write(chunk, encoding, callback) method.

stream.Writable 是一个抽象类,它被设计为通过在子类中实现底层的 _write(chunk, encoding, callback) 方法来进行扩充。

Please see above under API for Stream Consumers for how to consume writable streams in your programs. What follows is an explanation of how to implement Writable streams in your programs.

请阅读前文面向流消费者的 API 章节了解如何在您的程序中消费可写流。下文将解释如何在您的程序中自己实现 Writable 流。

new stream.Writable([options])#

  • options Object

    • highWaterMark Number Buffer level when write() starts returning false. Default=16kb, or 16 for objectMode streams
    • decodeStrings Boolean Whether or not to decode strings into Buffers before passing them to _write(). Default=true
  • options Object

    • highWaterMark Number write() 开始返回 false 的缓冲级别。缺省为 16kb,对于 objectMode 流则是 16
    • decodeStrings Boolean 是否在传递给 _write() 前将字符串解码成 Buffer。缺省为 true

In classes that extend the Writable class, make sure to call the constructor so that the buffering settings can be properly initialized.

请确保在扩充 Writable 类的类中调用构造函数以便缓冲设定能被正确初始化。

writable._write(chunk, encoding, callback)#

  • chunk Buffer | String The chunk to be written. Will always be a buffer unless the decodeStrings option was set to false.
  • encoding String If the chunk is a string, then this is the encoding type. Ignore if chunk is a buffer. Note that chunk will always be a buffer unless the decodeStrings option is explicitly set to false.
  • callback Function Call this function (optionally with an error argument) when you are done processing the supplied chunk.

  • chunk Buffer | String 要被写入的数据块。总会是一个 Buffer,除非 decodeStrings 选项被设定为 false

  • encoding String 如果数据块是字符串,则这里指定它的编码类型。如果数据块是 Buffer 则忽略此设定。请注意数据块总会是一个 Buffer,除非 decodeStrings 选项被明确设定为 false
  • callback Function 当您处理完所给数据块时调用此函数(可选地可附上一个错误参数)。

All Writable stream implementations must provide a _write() method to send data to the underlying resource.

所有 Writable 流的实现必须提供一个 _write() 方法来将数据发送到底层资源。

Note: This function MUST NOT be called directly. It should be implemented by child classes, and called by the internal Writable class methods only.

注意:该函数【禁止】被直接调用。它应该被子类所实现,并仅被 Writable 内部方法所调用。

Call the callback using the standard callback(error) pattern to signal that the write completed successfully or with an error.

使用标准的 callback(error) 形式来调用回调以表明写入成功完成或遇到错误。

If the decodeStrings flag is set in the constructor options, then chunk may be a string rather than a Buffer, and encoding will indicate the sort of string that it is. This is to support implementations that have an optimized handling for certain string data encodings. If you do not explicitly set the decodeStrings option to false, then you can safely ignore the encoding argument, and assume that chunk will always be a Buffer.

如果构造函数选项中设定了 decodeStrings 标志,则 chunk 可能会是字符串而不是 Buffer,并且 encoding 表明了字符串的格式。这种设计是为了支持对某些字符串数据编码提供优化处理的实现。如果您没有明确地将 decodeStrings 选项设定为 false,那么您可以安全地忽略 encoding 参数,并假定 chunk 总是一个 Buffer。

This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.

该方法以下划线开头是因为它对于定义它的类是内部的,并且不应该被用户程序直接调用。但是,你应当在您的扩充类中覆盖这个方法。

writable._writev(chunks, callback)#

  • chunks Array The chunks to be written. Each chunk has following format: { chunk: ..., encoding: ... }.
  • callback Function Call this function (optionally with an error argument) when you are done processing the supplied chunks.

  • chunks Array 要写入的块。每个块都遵循这种格式:{ chunk: ..., encoding: ... }

  • callback Function 当您处理完所给数据块时调用此函数(可选地可附上一个错误参数)。

Note: This function MUST NOT be called directly. It may be implemented by child classes, and called by the internal Writable class methods only.

注意:该函数【禁止】被直接调用。它应该被子类所实现,并仅被 Writable 内部方法所调用。

This function is completely optional to implement. In most cases it is unnecessary. If implemented, it will be called with all the chunks that are buffered in the write queue.

该函数的实现完全是可选的,在大多数情况下都是不必要的。如果实现,它会被以所有滞留在写入队列中的数据块调用。

类: stream.Duplex#

A "duplex" stream is one that is both Readable and Writable, such as a TCP socket connection.

“双工”(duplex)流同时兼具可读和可写特性,比如一个 TCP 套接字连接。

Note that stream.Duplex is an abstract class designed to be extended with an underlying implementation of the _read(size) and _write(chunk, encoding, callback) methods as you would with a Readable or Writable stream class.

值得注意的是,stream.Duplex 是一个抽象类,可以像 Readable 或 Writable 一样,通过子类实现底层的 _read(size)_write(chunk, encoding, callback) 方法来扩充。

Since JavaScript doesn't have multiple prototypal inheritance, this class prototypally inherits from Readable, and then parasitically from Writable. It is thus up to the user to implement both the lowlevel _read(n) method as well as the lowlevel _write(chunk, encoding, callback) method on extension duplex classes.

由于 JavaScript 并不具备多原型继承能力,这个类原型继承自 Readable,再寄生继承自 Writable。因此,需要用户在扩充的双工类中同时实现低级别的 _read(n) 方法和低级别的 _write(chunk, encoding, callback) 方法。

new stream.Duplex(options)#

  • options Object Passed to both Writable and Readable constructors. Also has the following fields:

    • allowHalfOpen Boolean Default=true. If set to false, then the stream will automatically end the readable side when the writable side ends and vice versa.
  • options Object 传递给 Writable 和 Readable 构造函数。并含有下列字段:

    • allowHalfOpen Boolean 缺省为 true。如果设为 false,那么当可写端终止时,流会自动终止可读端,反之亦然。

In classes that extend the Duplex class, make sure to call the constructor so that the buffering settings can be properly initialized.

请确保在扩充 Duplex 类的类中调用构造函数以便缓冲设定能被正确初始化。

类: stream.Transform#

A "transform" stream is a duplex stream where the output is causally connected in some way to the input, such as a zlib stream or a crypto stream.

“转换”(transform)流实际上是一个输出与输入存在因果关系的双工流,比如 zlib 流或 crypto 流。

There is no requirement that the output be the same size as the input, the same number of chunks, or arrive at the same time. For example, a Hash stream will only ever have a single chunk of output which is provided when the input is ended. A zlib stream will produce output that is either much smaller or much larger than its input.

输入和输出并无要求相同大小、相同块数或同时到达。举个例子,一个 Hash 流只会在输入结束时产生一个数据块的输出;一个 zlib 流会产生比输入小得多或大得多的输出。

Rather than implement the _read() and _write() methods, Transform classes must implement the _transform() method, and may optionally also implement the _flush() method. (See below.)

转换类必须实现 _transform() 方法,而不是 _read()_write() 方法。可选的,也可以实现 _flush() 方法。(详见下文。)

new stream.Transform([options])#

  • options Object Passed to both Writable and Readable constructors.

  • options Object 传递给 Writable 和 Readable 构造函数。

In classes that extend the Transform class, make sure to call the constructor so that the buffering settings can be properly initialized.

请确保在扩充 Transform 类的类中调用了构造函数,以使得缓冲设定能被正确初始化。

transform._transform(chunk, encoding, callback)#

  • chunk Buffer | String The chunk to be transformed. Will always be a buffer unless the decodeStrings option was set to false.
  • encoding String If the chunk is a string, then this is the encoding type. (Ignore if chunk is a buffer.)
  • callback Function Call this function (optionally with an error argument) when you are done processing the supplied chunk.

  • chunk Buffer | String 要被转换的数据块。总是 Buffer,除非 decodeStrings 选项被设定为 false

  • encoding String 如果数据块是一个字符串,那么这就是它的编码类型。(数据块是 Buffer 则会忽略此参数。)
  • callback Function 当您处理完所提供的数据块时调用此函数(可选地附上一个错误参数)。

Note: This function MUST NOT be called directly. It should be implemented by child classes, and called by the internal Transform class methods only.

注意:该函数【禁止】被直接调用。它应该被子类所实现,并仅被 Transform 内部方法所调用。

All Transform stream implementations must provide a _transform method to accept input and produce output.

所有转换流的实现都必须提供一个 _transform 方法来接受输入并产生输出。

_transform should do whatever has to be done in this specific Transform class, to handle the bytes being written, and pass them off to the readable portion of the interface. Do asynchronous I/O, process things, and so on.

_transform 应当承担特定 Transform 类中所有必要的工作:处理被写入的字节,并把它们交给接口的可读端,进行异步 I/O,处理其它事情等等。

Call transform.push(outputChunk) 0 or more times to generate output from this input chunk, depending on how much data you want to output as a result of this chunk.

调用 transform.push(outputChunk) 零或多次,以从这个输入块生成输出,次数取决于您想从这个数据块输出多少数据。

Call the callback function only when the current chunk is completely consumed. Note that there may or may not be output as a result of any particular input chunk.

仅当当前数据块被完全消费时才调用回调函数。注意,任何特定的输入块都可能产生输出,也可能不产生任何输出。

This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.

该方法以下划线开头是因为它对于定义它的类是内部的,并且不应该被用户程序直接调用。但是,你应当在您的扩充类中覆盖这个方法。

transform._flush(callback)#

  • callback Function Call this function (optionally with an error argument) when you are done flushing any remaining data.

  • callback Function 当您写入完毕剩下的数据后调用此函数(可选地可附上一个错误对象)。

Note: This function MUST NOT be called directly. It MAY be implemented by child classes, and if so, will be called by the internal Transform class methods only.

注意:该函数【禁止】被直接调用。它【可以】被子类所实现,并且如果实现,仅被 Transform 内部方法所调用。

In some cases, your transform operation may need to emit a bit more data at the end of the stream. For example, a Zlib compression stream will store up some internal state so that it can optimally compress the output. At the end, however, it needs to do the best it can with what is left, so that the data will be complete.

在一些情景中,您的转换操作可能需要在流的末尾多发生一点点数据。例如,一个 Zlib 压缩流会储存一些内部状态以便更好地压缩输出,但在最后它需要尽可能好地处理剩下的东西以使数据完整。

In those cases, you can implement a _flush method, which will be called at the very end, after all the written data is consumed, but before emitting end to signal the end of the readable side. Just like with _transform, call transform.push(chunk) zero or more times, as appropriate, and call callback when the flush operation is complete.

在这种情况中,您可以实现一个 _flush 方法,它会在最后被调用:在所有写入数据被消费之后、但在触发 end 表示可读端到达末尾之前。和 _transform 一样,适当地调用 transform.push(chunk) 零或多次,并在冲洗操作完成时调用 callback

This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.

该方法以下划线开头是因为它对于定义它的类是内部的,并且不应该被用户程序直接调用。但是,你应当在您的扩充类中覆盖这个方法。

例子: SimpleProtocol 解析器 v2#

The example above of a simple protocol parser can be implemented simply by using the higher level Transform stream class, similar to the parseHeader and SimpleProtocol v1 examples above.

上文的简易协议解析器例子能够很简单地使用高级别 Transform 流类实现,类似于前文 parseHeaderSimpleProtocol v1 示例。

In this example, rather than providing the input as an argument, it would be piped into the parser, which is a more idiomatic Node stream approach.

在这个示例中,输入会被导流到解析器中,而不是作为参数提供。这种做法更符合 Node 流的惯例。

var util = require('util');
var Transform = require('stream').Transform;
util.inherits(SimpleProtocol, Transform);

function SimpleProtocol(options) {
  if (!(this instanceof SimpleProtocol))
    return new SimpleProtocol(options);

  Transform.call(this, options);
  this._inBody = false;
  this._sawFirstCr = false;
  this._rawHeader = [];
  this.header = null;
}

SimpleProtocol.prototype._transform = function(chunk, encoding, done) {
  if (!this._inBody) {
    // check if the chunk has a \n\n
    var split = -1;
    for (var i = 0; i < chunk.length; i++) {
      if (chunk[i] === 10) { // '\n'
        if (this._sawFirstCr) {
          split = i;
          break;
        } else {
          this._sawFirstCr = true;
        }
      } else {
        this._sawFirstCr = false;
      }
    }

SimpleProtocol.prototype._transform = function(chunk, encoding, done) {
  if (!this._inBody) {
    // 检查数据块是否有 \n\n
    var split = -1;
    for (var i = 0; i < chunk.length; i++) {
      if (chunk[i] === 10) { // '\n'
        if (this._sawFirstCr) {
          split = i;
          break;
        } else {
          this._sawFirstCr = true;
        }
      } else {
        this._sawFirstCr = false;
      }
    }

    if (split === -1) {
      // 仍旧等待 \n\n
      // 暂存数据块并重试。
      this._rawHeader.push(chunk);
    } else {
      this._inBody = true;
      var h = chunk.slice(0, split);
      this._rawHeader.push(h);
      var header = Buffer.concat(this._rawHeader).toString();
      try {
        this.header = JSON.parse(header);
      } catch (er) {
        this.emit('error', new Error('invalid simple protocol data'));
        return;
      }
      // 并让它们知道我们完成了头部解析。
      this.emit('header', this.header);

      // 现在,由于我们获得了一些额外的数据,先触发这个。
      this.push(chunk.slice(split));
    }
  } else {
    // 之后,仅需向我们的消费者原样提供数据。
    this.push(chunk);
  }
  done();
};

// 用法:
// var parser = new SimpleProtocol();
// source.pipe(parser)
// 现在 parser 是一个会触发 'header' 并带上解析后的
// 头部数据的可读流。

类: stream.PassThrough#

This is a trivial implementation of a Transform stream that simply passes the input bytes across to the output. Its purpose is mainly for examples and testing, but there are occasionally use cases where it can come in handy as a building block for novel sorts of streams.

这是 Transform 流的一个简单实现,将输入的字节简单地传递给输出。它的主要用途是演示和测试,但偶尔要构建某种特殊流的时候也能派上用场。

流:内部细节#

缓冲#

Both Writable and Readable streams will buffer data on an internal object called _writableState.buffer or _readableState.buffer, respectively.

Writable 和 Readable 流都会分别在名为 _writableState.buffer_readableState.buffer 的内部对象中缓冲数据。

The amount of data that will potentially be buffered depends on the highWaterMark option which is passed into the constructor.

被缓冲的数据量取决于传递给构造函数的 highWaterMark(最高水位线)选项。

Buffering in Readable streams happens when the implementation calls stream.push(chunk). If the consumer of the Stream does not call stream.read(), then the data will sit in the internal queue until it is consumed.

Readable 流的缓冲发生在实现调用 stream.push(chunk) 的时候。如果流的消费者没有调用 stream.read(),那么数据将会一直待在内部队列中,直到被消费。

Buffering in Writable streams happens when the user calls stream.write(chunk) repeatedly, even when write() returns false.

Writable 流的缓冲发生在用户反复调用 stream.write(chunk) 的时候,即便此时 write() 已返回 false

The purpose of streams, especially with the pipe() method, is to limit the buffering of data to acceptable levels, so that sources and destinations of varying speed will not overwhelm the available memory.

流(尤其是 pipe() 方法)的初衷,是将数据的缓冲量限制到一个可接受的水平,以使得速度不同的来源和目标不会淹没可用内存。

stream.read(0)#

There are some cases where you want to trigger a refresh of the underlying readable stream mechanisms, without actually consuming any data. In that case, you can call stream.read(0), which will always return null.

在某些情景中,您可能需要触发底层可读流机制的刷新,但不真正消费任何数据。在这种情况下,您可以调用 stream.read(0),它总会返回 null

If the internal read buffer is below the highWaterMark, and the stream is not currently reading, then calling read(0) will trigger a low-level _read call.

如果内部读取缓冲低于 highWaterMark 水位线,并且流当前不在读取状态,那么调用 read(0) 会触发一个低级 _read 调用。

There is almost never a need to do this. However, you will see some cases in Node's internals where this is done, particularly in the Readable stream class internals.

虽然几乎没有必要这么做,但您可以在 Node 内部的某些地方看到它确实这么做了,尤其是在 Readable 流类的内部。

stream.push('')#

Pushing a zero-byte string or Buffer (when not in Object mode) has an interesting side effect. Because it is a call to stream.push(), it will end the reading process. However, it does not add any data to the readable buffer, so there's nothing for a user to consume.

推入一个零字节字符串或 Buffer(当不在对象模式时)有一个有趣的副作用。因为它是一个对 stream.push() 的调用,它会结束读取过程;然而它并没有添加任何数据到可读缓冲中,所以没有东西可供用户消费。

Very rarely, there are cases where you have no data to provide now, but the consumer of your stream (or, perhaps, another bit of your own code) will know when to check again, by calling stream.read(0). In those cases, you may call stream.push('').

在极少数情况下,您当时没有数据可提供,但您的流的消费者(或您代码的其它部分)会通过调用 stream.read(0) 得知何时再次检查。在这种情况下,您可以调用 stream.push('')

So far, the only use case for this functionality is in the tls.CryptoStream class, which is deprecated in Node v0.12. If you find that you have to use stream.push(''), please consider another approach, because it almost certainly indicates that something is horribly wrong.

到目前为止,这个功能唯一的使用情景是在 tls.CryptoStream 类中,而它将在 Node v0.12 中被废弃。如果您发现您不得不使用 stream.push(''),请考虑另一种方式,因为这几乎明确表明发生了某种可怕的错误。

与 Node 早期版本的兼容性#

In versions of Node prior to v0.10, the Readable stream interface was simpler, but also less powerful and less useful.

在 v0.10 之前版本的 Node 中,Readable 流的接口较为简单,同时功能和实用性也较弱。

  • Rather than waiting for you to call the read() method, 'data' events would start emitting immediately. If you needed to do some I/O to decide how to handle data, then you had to store the chunks in some kind of buffer so that they would not be lost.
  • The pause() method was advisory, rather than guaranteed. This meant that you still had to be prepared to receive 'data' events even when the stream was in a paused state.

  • 'data' 事件会立即开始触发,而不会等待您调用 read() 方法。如果您需要进行某些 I/O 来决定如何处理数据,那么您只能将数据块储存到某种缓冲区中以防它们流失。

  • pause() 方法仅起提议作用,而不保证生效。这意味着,即便当流处于暂停状态时,您仍然需要准备接收 'data' 事件。

In Node v0.10, the Readable class described below was added. For backwards compatibility with older Node programs, Readable streams switch into "flowing mode" when a 'data' event handler is added, or when the resume() method is called. The effect is that, even if you are not using the new read() method and 'readable' event, you no longer have to worry about losing 'data' chunks.

在 Node v0.10 中,下文所述的 Readable 类被加入进来。为了向后兼容考虑,Readable 流会在添加了 'data' 事件监听器、或 resume() 方法被调用时切换至“流动模式”。其作用是,即便您不使用新的 read() 方法和 'readable' 事件,您也不必担心丢失 'data' 数据块。

Most programs will continue to function normally. However, this introduces an edge case in the following conditions:

大多数程序会维持正常功能,然而,这也会在下列条件下引入一种边界情况:

  • No 'data' event handler is added.
  • The resume() method is never called.
  • The stream is not piped to any writable destination.

  • 没有添加 'data' 事件处理器。

  • resume() 方法从未被调用。
  • 流未被导流到任何可写目标。

For example, consider the following code:

举个例子,请留意下面代码:

// 警告!不能用!
net.createServer(function(socket) {

  // 我们添加了一个 'end' 事件,但从未消费数据
  socket.on('end', function() {
    // 它永远不会到达这里
    socket.end('我收到了您的来信(但我没看它)\n');
  });

}).listen(1337);

In versions of node prior to v0.10, the incoming message data would be simply discarded. However, in Node v0.10 and beyond, the socket will remain paused forever.

在 Node v0.10 之前的版本中,传入消息数据会被简单地丢弃。然而在 Node v0.10 及之后,socket 会一直保持暂停。

The workaround in this situation is to call the resume() method to start the flow of data:

对于这种情形,解决的办法是调用 resume() 方法来开启数据流:

// 解决方法
net.createServer(function(socket) {

  socket.on('end', function() {
    socket.end('我收到了您的来信(但我没看它)\n');
  });

  // 开启数据流,并丢弃它们。
  socket.resume();

}).listen(1337);

In addition to new Readable streams switching into flowing mode, pre-v0.10 style streams can be wrapped in a Readable class using the wrap() method.

除了新的 Readable 流会切换到流动模式之外,v0.10 之前风格的流还可以通过 wrap() 方法被包装成 Readable 类。

对象模式#

Normally, Streams operate on Strings and Buffers exclusively.

通常情况下,流只操作字符串和 Buffer。

Streams that are in object mode can emit generic JavaScript values other than Buffers and Strings.

处于对象模式的流除了 Buffer 和字符串外还能读出普通的 JavaScript 值。

A Readable stream in object mode will always return a single item from a call to stream.read(size), regardless of what the size argument is.

一个处于对象模式的 Readable 流调用 stream.read(size) 时总会返回单个项目,无论传入什么 size 参数。

A Writable stream in object mode will always ignore the encoding argument to stream.write(data, encoding).

一个处于对象模式的 Writable 流总是会忽略传给 stream.write(data, encoding)encoding 参数。

The special value null still retains its special value for object mode streams. That is, for object mode readable streams, null as a return value from stream.read() indicates that there is no more data, and stream.push(null) will signal the end of stream data (EOF).

特殊值 null 在对象模式流中依旧保持它的特殊性。也就说,对于对象模式的可读流,stream.read() 返回 null 意味着没有更多数据,同时 stream.push(null) 会告知流数据到达末端(EOF)。

No streams in Node core are object mode streams. This pattern is only used by userland streaming libraries.

Node 核心不存在对象模式的流,这种设计只被某些用户态流式库所使用。

You should set objectMode in your stream child class constructor on the options object. Setting objectMode mid-stream is not safe.

您应该在您的流子类构造函数的选项对象中设置 objectMode。在流的过程中设置 objectMode 是不安全的。

状态对象#

Readable streams have a member object called _readableState. Writable streams have a member object called _writableState. Duplex streams have both.

Readable 流有一个成员对象叫作 _readableStateWritable 流有一个成员对象叫作 _writableStateDuplex 流二者兼备。

These objects should generally not be modified in child classes. However, if you have a Duplex or Transform stream that should be in objectMode on the readable side, and not in objectMode on the writable side, then you may do this in the constructor by setting the flag explicitly on the appropriate state object.

这些对象通常不应该被子类所更改。然而,如果您有一个 Duplex 或 Transform 流,它的可读端应该是 objectMode,但可写端却又不是 objectMode,那么您可以在构造函数里明确地设定合适的状态对象的标记来达到此目的。

var util = require('util');
var StringDecoder = require('string_decoder').StringDecoder;
var Transform = require('stream').Transform;
util.inherits(JSONParseStream, Transform);

// Gets \n-delimited JSON string data, and emits the parsed objects
function JSONParseStream(options) {
  if (!(this instanceof JSONParseStream))
    return new JSONParseStream(options);

// 获取以 \n 分隔的 JSON 字符串数据,并丢出解析后的对象
function JSONParseStream(options) {
  if (!(this instanceof JSONParseStream))
    return new JSONParseStream(options);

  Transform.call(this, options);
  this._writableState.objectMode = false;
  this._readableState.objectMode = true;
  this._buffer = '';
  this._decoder = new StringDecoder('utf8');
}

JSONParseStream.prototype._transform = function(chunk, encoding, cb) {
  this._buffer += this._decoder.write(chunk);
  // split on newlines
  var lines = this._buffer.split(/\r?\n/);
  // keep the last partial line buffered
  this._buffer = lines.pop();
  for (var l = 0; l < lines.length; l++) {
    var line = lines[l];
    try {
      var obj = JSON.parse(line);
    } catch (er) {
      this.emit('error', er);
      return;
    }
    // push the parsed object out to the readable consumer
    this.push(obj);
  }
  cb();
};

JSONParseStream.prototype._transform = function(chunk, encoding, cb) {
  this._buffer += this._decoder.write(chunk);
  // 以新行分割
  var lines = this._buffer.split(/\r?\n/);
  // 保留最后一行被缓冲
  this._buffer = lines.pop();
  for (var l = 0; l < lines.length; l++) {
    var line = lines[l];
    try {
      var obj = JSON.parse(line);
    } catch (er) {
      this.emit('error', er);
      return;
    }
    // 推出解析后的对象到可读消费者
    this.push(obj);
  }
  cb();
};

JSONParseStream.prototype._flush = function(cb) {
  // 仅仅处理剩下的东西
  var rem = this._buffer.trim();
  if (rem) {
    try {
      var obj = JSON.parse(rem);
    } catch (er) {
      this.emit('error', er);
      return;
    }
    // 推出解析后的对象到可读消费者
    this.push(obj);
  }
  cb();
};

The state objects contain other useful information for debugging the state of streams in your programs. It is safe to look at them, but beyond setting option flags in the constructor, it is not safe to modify them.

状态对象包含了其它调试您的程序的流的状态时有用的信息。读取它们是可以的,但越过构造函数的选项来更改它们是不安全的

加密(Crypto)#

稳定度: 2 - 不稳定;正在讨论未来版本的API变动。会尽量减少重大变动的发生。详见下文。

Use require('crypto') to access this module.

使用 require('crypto') 来调用该模块。

The crypto module offers a way of encapsulating secure credentials to be used as part of a secure HTTPS net or http connection.

crypto 模块提供了一种封装安全凭证的方法,可用作安全 HTTPS 网络或 http 连接的一部分。

It also offers a set of wrappers for OpenSSL's hash, hmac, cipher, decipher, sign and verify methods.

它还提供了对 OpenSSL 的 hash(哈希)、hmac、cipher(加密)、decipher(解密)、sign(签名)和 verify(验证)等方法的一系列封装。

crypto.getCiphers()#

Returns an array with the names of the supported ciphers.

返回一个数组,包含支持的加密算法的名字。

Example:

示例:

var ciphers = crypto.getCiphers();
console.log(ciphers); // ['AES-128-CBC', 'AES-128-CBC-HMAC-SHA1', ...]

crypto.getHashes()#

Returns an array with the names of the supported hash algorithms.

返回一个包含所支持的哈希算法的数组。

Example:

示例:

var hashes = crypto.getHashes();
console.log(hashes); // ['sha', 'sha1', 'sha1WithRSAEncryption', ...]

crypto.createCredentials(details)#

Creates a credentials object, with the optional details being a dictionary with keys:

创建一个加密凭证对象,接受一个可选的参数对象:

  • pfx : A string or buffer holding the PFX or PKCS12 encoded private key, certificate and CA certificates
  • key : A string holding the PEM encoded private key
  • passphrase : A string of passphrase for the private key or pfx
  • cert : A string holding the PEM encoded certificate
  • ca : Either a string or list of strings of PEM encoded CA certificates to trust.
  • crl : Either a string or list of strings of PEM encoded CRLs (Certificate Revocation List)
  • ciphers: A string describing the ciphers to use or exclude. Consult http://www.openssl.org/docs/apps/ciphers.html#CIPHER_LIST_FORMAT for details on the format.

  • pfx : 一个字符串或者buffer对象,代表经PFX或者PKCS12编码产生的私钥、证书以及CA证书

  • key : 一个字符串,代表经PEM编码产生的私钥
  • passphrase : 私钥或者pfx的密码
  • cert : 一个字符串,代表经PEM编码产生的证书
  • ca : 一个字符串或者字符串数组,表示可信任的经PEM编码产生的CA证书列表
  • crl : 一个字符串或者字符串数组,表示经PEM编码产生的CRL(证书吊销列表 Certificate Revocation List)
  • ciphers: 一个字符串,表示需要使用或者排除的加密算法 可以在 http://www.openssl.org/docs/apps/ciphers.html#CIPHER_LIST_FORMAT 查看更多关于加密算法格式的资料。

If no 'ca' details are given, then node.js will use the default publicly trusted list of CAs as given in

http://mxr.mozilla.org/mozilla/source/security/nss/lib/ckfw/builtins/certdata.txt.

如果没有指定ca,node.js会使用http://mxr.mozilla.org/mozilla/source/security/nss/lib/ckfw/builtins/certdata.txt提供的公共可信任的CA列表。

crypto.createHash(algorithm)#

Creates and returns a hash object, a cryptographic hash with the given algorithm which can be used to generate hash digests.

创建并返回一个哈希对象,一个使用所给算法的用于生成摘要的加密哈希。

algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha1', 'md5', 'sha256', 'sha512', etc. On recent releases, openssl list-message-digest-algorithms will display the available digest algorithms.

algorithm 取决于平台上所安装的 OpenSSL 版本所支持的算法。比如 'sha1''md5''sha256''sha512' 等等。在最近的发行版本中,openssl list-message-digest-algorithms 会显示可用的摘要算法。

Example: this program that takes the sha1 sum of a file

例子:这段程序会计算出一个文件的 sha1 摘要值。

var filename = process.argv[2];
var crypto = require('crypto');
var fs = require('fs');

var shasum = crypto.createHash('sha1');

var s = fs.ReadStream(filename);
s.on('data', function(d) {
  shasum.update(d);
});

s.on('end', function() {
  var d = shasum.digest('hex');
  console.log(d + '  ' + filename);
});

类: Hash#

The class for creating hash digests of data.

创建数据哈希摘要的类。

It is a stream that is both readable and writable. The written data is used to compute the hash. Once the writable side of the stream is ended, use the read() method to get the computed hash digest. The legacy update and digest methods are also supported.

它是一个既可读又可写的流。所写入的数据会被用作计算哈希。当流的可写端终止后,使用 read() 方法来获取计算得的哈希摘要。同时也支持旧有的 updatedigest 方法。

Returned by crypto.createHash.

通过 crypto.createHash 返回。

hash.update(data, [input_encoding])#

Updates the hash content with the given data, the encoding of which is given in input_encoding and can be 'utf8', 'ascii' or 'binary'. If no encoding is provided, then a buffer is expected.

通过提供的数据更新哈希对象,可以通过input_encoding指定编码为'utf8''ascii'或者 'binary'。如果没有指定编码,将作为二进制数据(buffer)处理。

This can be called many times with new data as it is streamed.

因为它是流式数据,所以可以使用不同的数据调用很多次。

hash.digest([encoding])#

Calculates the digest of all of the passed data to be hashed. The encoding can be 'hex', 'binary' or 'base64'. If no encoding is provided, then a buffer is returned.

计算传入的所有数据的摘要值。encoding可以是'hex''binary'或者'base64',如果没有指定,会返回一个buffer对象。

Note: hash object can not be used after digest() method has been called.

注意:hash 对象在 digest() 方法被调用后将不可用。

crypto.createHmac(algorithm, key)#

Creates and returns a hmac object, a cryptographic hmac with the given algorithm and key.

创建并返回一个 hmac 对象,即一个用给定算法和密钥生成的加密 HMAC。

It is a stream that is both readable and writable. The written data is used to compute the hmac. Once the writable side of the stream is ended, use the read() method to get the computed digest. The legacy update and digest methods are also supported.

它是一个既可读又可写的流(stream)。写入的数据会被用于计算hmac。写入终止后,可以使用read()方法获取计算后的摘要值。之前版本的updatedigest方法仍然支持。

algorithm is dependent on the available algorithms supported by OpenSSL - see createHash above. key is the hmac key to be used.

algorithm 取决于 OpenSSL 所支持的可用算法——参见上文的 createHashkey 是 hmac 要使用的密钥。

Class: Hmac#

Class for creating cryptographic hmac content.

用于创建 hmac 加密内容的类。

Returned by crypto.createHmac.

crypto.createHmac返回。

hmac.update(data)#

Update the hmac content with the given data. This can be called many times with new data as it is streamed.

通过提供的数据更新hmac对象。因为它是流式数据,所以可以使用新数据调用很多次。

hmac.digest([encoding])#

Calculates the digest of all of the passed data to the hmac. The encoding can be 'hex', 'binary' or 'base64'. If no encoding is provided, then a buffer is returned.

计算传入的所有数据的hmac摘要值。encoding可以是'hex''binary'或者'base64',如果没有指定,会返回一个buffer对象。

Note: hmac object can not be used after digest() method has been called.

注意: hmac对象在调用digest()之后就不再可用了。

crypto.createCipher(algorithm, password)#

Creates and returns a cipher object, with the given algorithm and password.

用给定的算法和密码,创建并返回一个cipher加密算法的对象。(译者:cipher 就是加密算法的意思, ssl 的 cipher 主要是对称加密算法和不对称加密算法的组合。)

algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent releases, openssl list-cipher-algorithms will display the available cipher algorithms. password is used to derive key and IV, which must be a 'binary' encoded string or a buffer.

algorithm 算法依赖于 OpenSSL 库,例如 'aes192' 等。在最近发布的版本中,执行命令 openssl list-cipher-algorithms 会显示出所有可用的加密算法。password 被用来派生 key 和 IV,它必须是一个 'binary' 编码的字符串或一个 buffer。(译者:key 表示密钥,IV 表示初始化向量,在加密和解密过程中都要使用)

It is a stream that is both readable and writable. The written data is used to compute the hash. Once the writable side of the stream is ended, use the read() method to get the computed hash digest. The legacy update and digest methods are also supported.

它是一个既可读又可写的流。所写入的数据会被用作计算哈希。当流的可写端终止后,使用 read() 方法来获取计算得的哈希摘要。同时也支持旧有的 updatedigest 方法。

crypto.createCipheriv(algorithm, key, iv)#

Creates and returns a cipher object, with the given algorithm, key and iv.

用给定的算法、密码和向量,创建并返回一个cipher加密算法的对象。

algorithm is the same as the argument to createCipher(). key is the raw key used by the algorithm. iv is an initialization vector.

algorithmcreateCipher() 方法的参数相同。key 是供该算法使用的原始密钥,iv 是一个初始化向量。

key and iv must be 'binary' encoded strings or buffers.

key 密钥和 iv 向量必须是 'binary' 编码的字符串或 buffer。

Class: Cipher#

Class for encrypting data.

这个类是用来加密数据的。

Returned by crypto.createCipher and crypto.createCipheriv.

这个类由 crypto.createCiphercrypto.createCipheriv 返回。

Cipher objects are streams that are both readable and writable. The written plain text data is used to produce the encrypted data on the readable side. The legacy update and final methods are also supported.

Cipher 加密对象是既可读又可写的流。写入的明文数据会在可读端被用于产生加密数据。同时也支持旧有的 updatefinal 方法。

cipher.update(data, [input_encoding], [output_encoding])#

Updates the cipher with data, the encoding of which is given in input_encoding and can be 'utf8', 'ascii' or 'binary'. If no encoding is provided, then a buffer is expected.

data参数更新cipher加密对象, 它的编码input_encoding必须是下列给定编码的 'utf8', 'ascii' or 'binary' 中一种。如果没有编码参数,那么打他参数必须是一个buffer。

The output_encoding specifies the output format of the enciphered data, and can be 'binary', 'base64' or 'hex'. If no encoding is provided, then a buffer is returned.

参数 output_encoding 指定了加密数据的输出格式,可以是 'binary''base64''hex'。如果没有提供这个参数,则返回一个 buffer。

Returns the enciphered contents, and can be called many times with new data as it is streamed.

返回加密后的内容。由于是流式数据,它可以随着新数据被调用多次。

cipher.final([output_encoding])#

Returns any remaining enciphered contents, with output_encoding being one of: 'binary', 'base64' or 'hex'. If no encoding is provided, then a buffer is returned.

返回剩余的加密内容,output_encoding'binary', 'base64''hex'中的任意一个。 如果没有提供编码格式,则返回一个buffer对象。

Note: cipher object can not be used after final() method has been called.

注:调用 final() 方法后,cipher 对象不能再被使用。

cipher.setAutoPadding(auto_padding=true)#

You can disable automatic padding of the input data to block size. If auto_padding is false, the length of the entire input data must be a multiple of the cipher's block size or final will fail. Useful for non-standard padding, e.g. using 0x0 instead of PKCS padding. You must call this before cipher.final.

对于将输入数据自动填充到块大小的功能,你可以将其禁用。如果auto_padding是false, 那么整个输入数据的长度必须是加密器的块大小的整倍数,否则final会失败。这对非标准的填充很有用,例如使用0x0而不是PKCS的填充。这个函数必须在cipher.final之前调用。

crypto.createDecipher(algorithm, password)#

Creates and returns a decipher object, with the given algorithm and key. This is the mirror of the createCipher() above.

根据给定的算法和密钥,创建并返回一个解密器对象。这是上述createCipher()的一个镜像。

crypto.createDecipheriv(algorithm, key, iv)#

Creates and returns a decipher object, with the given algorithm, key and iv. This is the mirror of the createCipheriv() above.

根据给定的算法、密钥和初始化向量,创建并返回一个解密器对象。这是上述createCipheriv()的一个镜像。

Class: Decipher#

Class for decrypting data.

解密数据的类。

Returned by crypto.createDecipher and crypto.createDecipheriv.

crypto.createDeciphercrypto.createDecipheriv返回。

Decipher objects are streams that are both readable and writable. The written enciphered data is used to produce the plain-text data on the the readable side. The legacy update and final methods are also supported.

解密器对象是可读写的流。写入的加密数据会在可读的一侧产生明文数据。旧有的 update 和 final 方法也仍然被支持。

decipher.update(data, [input_encoding], [output_encoding])#

Updates the decipher with data, which is encoded in 'binary', 'base64' or 'hex'. If no encoding is provided, then a buffer is expected.

data来更新解密器,其中data'binary', 'base64''hex'进行编码。如果没有指明编码方式,则默认data是一个buffer对象。

The output_decoding specifies in what format to return the deciphered plaintext: 'binary', 'ascii' or 'utf8'. If no encoding is provided, then a buffer is returned.

output_decoding指明了用以下哪种编码方式返回解密后的平文:'binary', 'ascii''utf8'。如果没有指明编码方式,则返回一个buffer对象。

decipher.final([output_encoding])#

Returns any remaining plaintext which is deciphered, with output_encoding being one of: 'binary', 'ascii' or 'utf8'. If no encoding is provided, then a buffer is returned.

返回剩余的解密后的明文,output_encoding 是 'binary'、'ascii' 或 'utf8' 中的任意一个。如果没有指明编码方式,则返回一个 buffer 对象。

Note: decipher object can not be used after final() method has been called.

注: 调用final()函数后不能使用decipher 对象。

decipher.setAutoPadding(auto_padding=true)#

You can disable auto padding if the data has been encrypted without standard block padding to prevent decipher.final from checking and removing it. Can only work if the input data's length is a multiple of the ciphers block size. You must call this before streaming data to decipher.update.

如果数据以非标准的块填充方式被加密,那么你可以禁用自动填充来防止decipher.final对数据进行检查和移除。这只有在输入数据的长度是加密器块大小的整倍数时才有效。这个函数必须在将数据流传递给decipher.update之前调用。

crypto.createSign(algorithm)#

Creates and returns a signing object, with the given algorithm. On recent OpenSSL releases, openssl list-public-key-algorithms will display the available signing algorithms. Examples are 'RSA-SHA256'.

根据给定的算法,创建并返回一个signing对象。在最近的OpenSSL发布版本中,openssl list-public-key-algorithms会列出可用的签名算法,例如'RSA-SHA256'

Class: Sign#

Class for generating signatures.

生成数字签名的类

Returned by crypto.createSign.

crypto.createSign返回。

Sign objects are writable streams. The written data is used to generate the signature. Once all of the data has been written, the sign method will return the signature. The legacy update method is also supported.

Sign 对象是可写的流。写入的数据用来生成数字签名。当所有数据都写入后,sign 方法会返回数字签名。旧有的 update 方法也仍然被支持。

sign.update(data)#

Updates the sign object with data. This can be called many times with new data as it is streamed.

data来更新sign对象。 This can be called many times with new data as it is streamed.

sign.sign(private_key, [output_format])#

Calculates the signature on all the updated data passed through the sign. private_key is a string containing the PEM encoded private key for signing.

根据所有传送给sign的更新数据来计算电子签名。private_key是一个包含了签名私钥的字符串,而该私钥是用PEM编码的。

Returns the signature in output_format which can be 'binary', 'hex' or 'base64'. If no encoding is provided, then a buffer is returned.

返回一个数字签名,该签名的格式可以是'binary', 'hex''base64'. 如果没有指明编码方式,则返回一个buffer对象。

Note: sign object can not be used after sign() method has been called.

注:调用sign()后不能使用sign对象。

crypto.createVerify(algorithm)#

Creates and returns a verification object, with the given algorithm. This is the mirror of the signing object above.

根据指明的算法,创建并返回一个验证器对象。这是上述签名器对象的镜像。

Class: Verify#

Class for verifying signatures.

用来验证数字签名的类。

Returned by crypto.createVerify.

crypto.createVerify返回。

Verify objects are writable streams. The written data is used to validate against the supplied signature. Once all of the data has been written, the verify method will return true if the supplied signature is valid. The legacy update method is also supported.

验证器对象是可写的流。写入的数据会被用来验证所提供的数字签名。在所有数据写入后,如果提供的数字签名有效,verify 方法会返回 true。旧有的 update 方法也仍然被支持。

verifier.update(data)#

Updates the verifier object with data. This can be called many times with new data as it is streamed.

用 data 更新验证器对象。当以流的形式传入新数据时,它可以被多次调用。

verifier.verify(object, signature, [signature_format])#

Verifies the signed data by using the object and signature. object is a string containing a PEM encoded object, which can be one of RSA public key, DSA public key, or X.509 certificate. signature is the previously calculated signature for the data, in the signature_format which can be 'binary', 'hex' or 'base64'. If no encoding is specified, then a buffer is expected.

objectsignature来验证被签名的数据。 object是一个字符串,这个字符串包含了一个被PEM编码的对象,这个对象可以是RSA公钥,DSA公钥或者X.509 证书。 signature是之前计算出来的数字签名,其中的 signature_format可以是'binary', 'hex''base64'. 如果没有指明编码方式,那么默认是一个buffer对象。

Returns true or false depending on the validity of the signature for the data and public key.

根据数字签名对于数据和公钥的有效性,返回true或false。

Note: verifier object can not be used after verify() method has been called.

注: 调用verify()函数后不能使用verifier对象。

crypto.createDiffieHellman(prime_length)#

Creates a Diffie-Hellman key exchange object and generates a prime of the given bit length. The generator used is 2.

创建一个迪菲-赫尔曼密钥交换(Diffie-Hellman key exchange)对象,并根据给定的位长度生成一个质数。所用的生成器是 2。

crypto.createDiffieHellman(prime, [encoding])#

Creates a Diffie-Hellman key exchange object using the supplied prime. The generator used is 2. Encoding can be 'binary', 'hex', or 'base64'. If no encoding is specified, then a buffer is expected.

根据给定的质数创建一个迪菲-赫尔曼密钥交换(Diffie-Hellman key exchange)对象。 所用的生成器是2。编码方式可以是'binary', 'hex''base64'。如果没有指明编码方式,则默认是一个buffer对象。

Class: DiffieHellman#

The class for creating Diffie-Hellman key exchanges.

创建迪菲-赫尔曼密钥交换(Diffie-Hellman key exchanges)的类。

Returned by crypto.createDiffieHellman.

crypto.createDiffieHellman返回。

diffieHellman.generateKeys([encoding])#

Generates private and public Diffie-Hellman key values, and returns the public key in the specified encoding. This key should be transferred to the other party. Encoding can be 'binary', 'hex', or 'base64'. If no encoding is provided, then a buffer is returned.

生成迪菲-赫尔曼(Diffie-Hellman)算法的公钥和私钥,并根据指明的编码方式返回公钥。这个公钥可以转交给第三方。编码方式可以是 'binary', 'hex''base64'. 如果没有指明编码方式,则返回一个buffer对象。

diffieHellman.computeSecret(other_public_key, [input_encoding], [output_encoding])#

Computes the shared secret using other_public_key as the other party's public key and returns the computed shared secret. Supplied key is interpreted using specified input_encoding, and secret is encoded using specified output_encoding. Encodings can be 'binary', 'hex', or 'base64'. If the input encoding is not provided, then a buffer is expected.

other_public_key作为第三方公钥来计算共享秘密,并返回这个共享秘密。参数中的密钥会以input_encoding编码方式来解读,而共享密钥则会用output_encoding进行编码。编码方式可以是'binary', 'hex''base64'。如果没有提供输入的编码方式,则默认为一个buffer对象。

If no output encoding is given, then a buffer is returned.

如果没有指明输出的编码方式,则返回一个buffer对象。

diffieHellman.getPrime([encoding])#

Returns the Diffie-Hellman prime in the specified encoding, which can be 'binary', 'hex', or 'base64'. If no encoding is provided, then a buffer is returned.

根据指明的编码格式返回迪菲-赫尔曼(Diffie-Hellman)质数,其中编码方式可以是'binary', 'hex''base64'。如果没有指明编码方式,则返回一个buffer对象。

diffieHellman.getGenerator([encoding])#

Returns the Diffie-Hellman generator in the specified encoding, which can be 'binary', 'hex', or 'base64'. If no encoding is provided, then a buffer is returned.

根据指明的编码格式返回迪菲-赫尔曼(Diffie-Hellman)生成器,其中编码方式可以是'binary', 'hex'或'base64'。如果没有指明编码方式,则返回一个buffer对象。

diffieHellman.getPublicKey([encoding])#

Returns the Diffie-Hellman public key in the specified encoding, which can be 'binary', 'hex', or 'base64'. If no encoding is provided, then a buffer is returned.

根据指明的编码格式返回迪菲-赫尔曼(Diffie-Hellman)公钥,其中编码方式可以是'binary', 'hex''base64'。 如果没有指明编码方式,则返回一个buffer对象。

diffieHellman.getPrivateKey([encoding])#

Returns the Diffie-Hellman private key in the specified encoding, which can be 'binary', 'hex', or 'base64'. If no encoding is provided, then a buffer is returned.

根据指明的编码格式返回迪菲-赫尔曼(Diffie-Hellman)私钥,其中编码方式可以是'binary', 'hex''base64'。如果没有指明编码方式,则返回一个buffer对象。

diffieHellman.setPublicKey(public_key, [encoding])#

Sets the Diffie-Hellman public key. Key encoding can be 'binary', 'hex' or 'base64'. If no encoding is provided, then a buffer is expected.

设置迪菲-赫尔曼(Diffie-Hellman)公钥。编码方式可以是'binary', 'hex'或'base64'。如果没有指明编码方式,则默认为一个buffer对象。

diffieHellman.setPrivateKey(private_key, [encoding])#

Sets the Diffie-Hellman private key. Key encoding can be 'binary', 'hex' or 'base64'. If no encoding is provided, then a buffer is expected.

设置迪菲-赫尔曼(Diffie-Hellman)私钥。编码方式可以是'binary', 'hex'或'base64'。如果没有指明编码方式,则默认为一个buffer对象。

crypto.getDiffieHellman(group_name)#

Creates a predefined Diffie-Hellman key exchange object. The supported groups are: 'modp1', 'modp2', 'modp5' (defined in RFC 2412) and 'modp14', 'modp15', 'modp16', 'modp17', 'modp18' (defined in RFC 3526). The returned object mimics the interface of objects created by crypto.createDiffieHellman() above, but will not allow to change the keys (with diffieHellman.setPublicKey() for example). The advantage of using this routine is that the parties don't have to generate nor exchange group modulus beforehand, saving both processor and communication time.

创建一个预定义的迪菲-赫尔曼密钥交换(Diffie-Hellman key exchange)对象。支持以下的 D-H 组:'modp1', 'modp2', 'modp5'(在 RFC 2412 中定义)和 'modp14', 'modp15', 'modp16', 'modp17', 'modp18'(在 RFC 3526 中定义)。返回的对象模仿了上述 crypto.createDiffieHellman() 方法所创建的对象的接口,但不允许更换密钥(例如不能使用 diffieHellman.setPublicKey())。执行这套流程的好处是双方不需要事先生成或交换组余数,节省了处理器和通信时间。

Example (obtaining a shared secret):

例子 (获取一个共享秘密):

var crypto = require('crypto');

var alice = crypto.getDiffieHellman('modp5');
var bob = crypto.getDiffieHellman('modp5');

alice.generateKeys();
bob.generateKeys();

var alice_secret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
var bob_secret = bob.computeSecret(alice.getPublicKey(), null, 'hex');

/* alice_secret 和 bob_secret 应该是一样的 */
console.log(alice_secret == bob_secret);

crypto.pbkdf2(password, salt, iterations, keylen, callback)#

Asynchronous PBKDF2 applies pseudorandom function HMAC-SHA1 to derive a key of given length from the given password, salt and iterations. The callback gets two arguments (err, derivedKey).

异步的 PBKDF2 使用伪随机函数 HMAC-SHA1,根据给定的密码(password)、盐值(salt)和迭代次数(iterations),推导出指定长度(keylen)的密钥。回调函数得到两个参数 (err, derivedKey)。

crypto.pbkdf2Sync(password, salt, iterations, keylen)#

Synchronous PBKDF2 function. Returns derivedKey or throws error.

同步 PBKDF2 函数。返回derivedKey或抛出一个错误。

crypto.randomBytes(size, [callback])#

Generates cryptographically strong pseudo-random data. Usage:

生成密码学强度的伪随机数据。用法:

// 同步
try {
  var buf = crypto.randomBytes(256);
  console.log('有 %d 字节的随机数据: %s', buf.length, buf);
} catch (ex) {
  // handle error
}

crypto.pseudoRandomBytes(size, [callback])#

Generates non-cryptographically strong pseudo-random data. The data returned will be unique if it is sufficiently long, but is not necessarily unpredictable. For this reason, the output of this function should never be used where unpredictability is important, such as in the generation of encryption keys.

生成非密码学强度的伪随机数据。如果数据足够长,返回的数据会是唯一的,但并不一定不可预测。因此,当不可预测性很重要时,这个函数的输出绝不应该被使用,例如在生成加密密钥时。

Usage is otherwise identical to crypto.randomBytes.

用法与 crypto.randomBytes一模一样。

crypto.DEFAULT_ENCODING#

The default encoding to use for functions that can take either strings or buffers. The default value is 'buffer', which makes it default to using Buffer objects. This is here to make the crypto module more easily compatible with legacy programs that expected 'binary' to be the default encoding.

对于既可以接受字符串也可以接受 buffer 对象的函数所使用的默认编码方式。默认值是 'buffer',即默认使用 Buffer 对象。这是为了让 crypto 模块与那些默认以 'binary' 为编码方式的遗留程序更容易兼容。

Note that new programs will probably expect buffers, so only use this as a temporary measure.

要注意,新的程序会期待buffer对象,所以使用这个时请只作为暂时的手段。

Recent API Changes#

The Crypto module was added to Node before there was the concept of a unified Stream API, and before there were Buffer objects for handling binary data.

早在统一的流API概念出现,以及引入Buffer对象来处理二进制数据之前,Crypto模块就被添加到Node。

As such, the streaming classes don't have the typical methods found on other Node classes, and many methods accepted and returned Binary-encoded strings by default rather than Buffers. This was changed to use Buffers by default instead.

因为这样,与流有关的类中并没有其它Node类的典型函数,而且很多函数接受和返回默认的二进制编码的字符串,而不是Buffer对象。在最近的修改中,这些函数都被改成默认使用Buffer对象。

This is a breaking change for some use cases, but not all.

这对于某些(但不是全部)使用场景来讲是重大的改变。

For example, if you currently use the default arguments to the Sign class, and then pass the results to the Verify class, without ever inspecting the data, then it will continue to work as before. Where you once got a binary string and then presented the binary string to the Verify object, you'll now get a Buffer, and present the Buffer to the Verify object.

例如,如果你现在使用Sign类的默认参数,然后在没有检查数据的情况下,将结果传递给Verify类,那么程序会照常工作。在以前,你会拿到一个二进制字符串,然后把它传递给Verify对象;而现在,你会得到一个Buffer对象,然后把它传递给Verify对象。

However, if you were doing things with the string data that will not work properly on Buffers (such as, concatenating them, storing in databases, etc.), or you are passing binary strings to the crypto functions without an encoding argument, then you will need to start providing encoding arguments to specify which encoding you'd like to use. To switch to the previous style of using binary strings by default, set the crypto.DEFAULT_ENCODING field to 'binary'. Note that new programs will probably expect buffers, so only use this as a temporary measure.

但是,如果你以前是使用那些在 Buffer 对象上不能正常工作的字符串数据(例如拼接它们、存进数据库等),或者在没有编码参数的情况下将二进制字符串传递给加密函数,那你就需要开始提供编码参数来指明你想使用的编码方式了。如果想切换回旧的默认使用二进制字符串的风格,把 crypto.DEFAULT_ENCODING 字段设为 'binary'。但请注意,因为新的程序很可能会期望 buffer 对象,所以仅将此当做临时手段。

TLS (SSL)#

稳定度: 3 - 稳定

Use require('tls') to access this module.

使用 require('tls') 来访问此模块。

The tls module uses OpenSSL to provide Transport Layer Security and/or Secure Socket Layer: encrypted stream communication.

tls 模块使用 OpenSSL 来提供传输层安全协议(Transport Layer Security)和/或安全套接层(Secure Socket Layer):加密过的流通讯。

TLS/SSL is a public/private key infrastructure. Each client and each server must have a private key. A private key is created like this

TLS/SSL 是一种公钥/私钥架构。每个客户端和服务器都必有一个私钥。一个私钥使用类似的方式创建:

openssl genrsa -out ryans-key.pem 1024

All servers and some clients need to have a certificate. Certificates are public keys signed by a Certificate Authority or self-signed. The first step to getting a certificate is to create a "Certificate Signing Request" (CSR) file. This is done with:

所有服务器和某些客户端需要具备证书。证书是由证书颁发机构签发或自签发的公钥。获取证书的第一步是创建一个“证书签发申请”(CSR)文件。使用这条命令完成:

openssl req -new -key ryans-key.pem -out ryans-csr.pem

To create a self-signed certificate with the CSR, do this:

像这样使用 CSR 创建一个自签名证书:

openssl x509 -req -in ryans-csr.pem -signkey ryans-key.pem -out ryans-cert.pem

Alternatively you can send the CSR to a Certificate Authority for signing.

又或者你可以将 CSR 发送给一个数字证书认证机构请求签名。

(TODO: docs on creating a CA, for now interested users should just look at test/fixtures/keys/Makefile in the Node source code)

To create .pfx or .p12, do this:

像这样创建 .pfx 或 .p12:

openssl pkcs12 -export -in agent5-cert.pem -inkey agent5-key.pem \
    -certfile ca-cert.pem -out agent5.pfx
  • in: certificate
  • inkey: private key
  • certfile: all CA certs concatenated in one file like cat ca1-cert.pem ca2-cert.pem > ca-cert.pem


Client-initiated renegotiation attack mitigation#

The TLS protocol lets the client renegotiate certain aspects of the TLS session. Unfortunately, session renegotiation requires a disproportional amount of server-side resources, which makes it a potential vector for denial-of-service attacks.

TLS 协议允许客户端重新协商 TLS 会话的某些方面。不幸的是,会话重新协商需要消耗不成比例的服务器端资源,这使其成为拒绝服务攻击(denial-of-service)的一个潜在载体。

To mitigate this, renegotiations are limited to three times every 10 minutes. An error is emitted on the tls.TLSSocket instance when the threshold is exceeded. The limits are configurable:

为了缓解这个问题,重新协商被限制在每 10 分钟三次。超过这个阈值时,tls.TLSSocket 实例上会触发一个错误。这些限制是可配置的:

  • tls.CLIENT_RENEG_LIMIT: renegotiation limit, default is 3.

  • tls.CLIENT_RENEG_LIMIT: 重新协商的次数限制,默认为3。

  • tls.CLIENT_RENEG_WINDOW: renegotiation window in seconds, default is 10 minutes.

  • tls.CLIENT_RENEG_WINDOW: 重新协商窗口的秒数,默认为600(10分钟)。

Don't change the defaults unless you know what you are doing.

除非你完全理解整个机制和清楚自己要干什么,否则不要改变这个默认值。

To test your server, connect to it with openssl s_client -connect address:port and tap R<CR> (that's the letter R followed by a carriage return) a few times.

要测试你的服务器的话,用命令 openssl s_client -connect 地址:端口连接上服务器,然后敲击R<CR>(字母键R加回车键)几次。

NPN 和 SNI#

NPN (Next Protocol Negotiation) and SNI (Server Name Indication) are TLS handshake extensions allowing you:

  • NPN - to use one TLS server for multiple protocols (HTTP, SPDY)
  • SNI - to use one TLS server for multiple hostnames with different SSL certificates.

tls.getCiphers()#

Returns an array with the names of the supported SSL ciphers.

返回一个数组,其中包含了所支持的SSL加密器的名字。

Example:

示例:

var ciphers = tls.getCiphers();
console.log(ciphers); // ['AES128-SHA', 'AES256-SHA', ...]

tls.createServer(options, [secureConnectionListener])#

Creates a new tls.Server. The connectionListener argument is automatically set as a listener for the secureConnection event. The options object has these possibilities:

创建一个新的 tls.Server。connectionListener 参数会被自动设置为 secureConnection 事件的监听器。options 对象可以包含以下字段:

  • pfx: A string or Buffer containing the private key, certificate and CA certs of the server in PFX or PKCS12 format. (Mutually exclusive with the key, cert and ca options.)

  • pfx: 包含了服务器的私钥、证书和 CA 证书的字符串或 Buffer 对象,格式为 PFX 或 PKCS12。(与 key、cert 和 ca 选项互斥。)

  • key: A string or Buffer containing the private key of the server in PEM format. (Required)

  • key: 一个字符串或 Buffer 对象,其中包含了 PEM 格式的服务器私钥。(必需)

  • passphrase: A string of passphrase for the private key or pfx.

  • passphrase: 私钥或pfx密码的字符串。

  • cert: A string or Buffer containing the certificate key of the server in PEM format. (Required)

  • ca: An array of strings or Buffers of trusted certificates. If this is omitted several well known "root" CAs will be used, like VeriSign. These are used to authorize connections.

  • crl : Either a string or list of strings of PEM encoded CRLs (Certificate Revocation List)

  • ciphers: A string describing the ciphers to use or exclude.

    NOTE: Previous revisions of this section suggested AES256-SHA as an acceptable cipher. Unfortunately, AES256-SHA is a CBC cipher and therefore susceptible to BEAST attacks. Do not use it.

  • handshakeTimeout: Abort the connection if the SSL/TLS handshake does not finish in this many milliseconds. The default is 120 seconds.

    A 'clientError' is emitted on the tls.Server object whenever a handshake times out.

  • honorCipherOrder : When choosing a cipher, use the server's preferences instead of the client preferences.

    Although, this option is disabled by default, it is recommended that you use this option in conjunction with the ciphers option to mitigate BEAST attacks.

  • requestCert: If true the server will request a certificate from clients that connect and attempt to verify that certificate. Default: false.

  • rejectUnauthorized: If true the server will reject any connection which is not authorized with the list of supplied CAs. This option only has an effect if requestCert is true. Default: false.

  • NPNProtocols: An array or Buffer of possible NPN protocols. (Protocols should be ordered by their priority).

  • SNICallback(servername, cb): A function that will be called if client supports SNI TLS extension. Two argument will be passed to it: servername, and cb. SNICallback should invoke cb(null, ctx), where ctx is a SecureContext instance. (You can use crypto.createCredentials(...).context to get proper SecureContext). If SNICallback wasn't provided - default callback with high-level API will be used (see below).

  • sessionTimeout: An integer specifying the seconds after which TLS session identifiers and TLS session tickets created by the server are timed out. See SSL_CTX_set_timeout for more details.

  • sessionIdContext: A string containing a opaque identifier for session resumption. If requestCert is true, the default is MD5 hash value generated from command-line. Otherwise, the default is not provided.

  • secureProtocol: The SSL method to use, e.g. SSLv3_method to force SSL version 3. The possible values depend on your installation of OpenSSL and are defined in the constant SSL_METHODS.

Here is a simple example echo server:

这是一个简单的应答服务器例子:

var tls = require('tls');
var fs = require('fs');

var options = {
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),

  // 仅在使用客户端证书认证时才需要
  requestCert: true,

  // 仅在客户端使用自签名证书时才需要
  ca: [ fs.readFileSync('client-cert.pem') ]
};

var server = tls.createServer(options, function(socket) {
  console.log('服务器已连接',
              socket.authorized ? '已授权' : '未授权');
  socket.write("欢迎!\n");
  socket.setEncoding('utf8');
  socket.pipe(socket);
});
server.listen(8000, function() {
  console.log('服务器已绑定');
});

Or

或者

var options = {
  pfx: fs.readFileSync('server.pfx')
};

var server = tls.createServer(options, function(socket) {
  console.log('server connected',
              socket.authorized ? 'authorized' : 'unauthorized');
  socket.write("welcome!\n");
  socket.setEncoding('utf8');
  socket.pipe(socket);
});
server.listen(8000, function() {
  console.log('server bound');
});

You can test this server by connecting to it with openssl s_client:

您可以使用 openssl s_client 连接这个服务器来测试:

openssl s_client -connect 127.0.0.1:8000

tls.connect(options, [callback])#

tls.connect(port, [host], [options], [callback])#

Creates a new client connection to the given port and host (old API) or options.port and options.host. (If host is omitted, it defaults to localhost.) options should be an object which specifies:

  • host: Host the client should connect to

  • port: Port the client should connect to

  • socket: Establish secure connection on a given socket rather than creating a new socket. If this option is specified, host and port are ignored.

  • pfx: A string or Buffer containing the private key, certificate and CA certs of the server in PFX or PKCS12 format.

  • key: A string or Buffer containing the private key of the client in PEM format.

  • passphrase: A string of passphrase for the private key or pfx.

  • passphrase: 私钥或pfx密码的字符串。

  • cert: A string or Buffer containing the certificate key of the client in PEM format.

  • ca: An array of strings or Buffers of trusted certificates. If this is omitted several well known "root" CAs will be used, like VeriSign. These are used to authorize connections.

  • rejectUnauthorized: If true, the server certificate is verified against the list of supplied CAs. An 'error' event is emitted if verification fails. Default: true.

  • NPNProtocols: An array of string or Buffer containing supported NPN protocols. Buffer should have following format: 0x05hello0x05world, where first byte is next protocol name's length. (Passing array should usually be much simpler: ['hello', 'world'].)

  • servername: Servername for SNI (Server Name Indication) TLS extension.

  • secureProtocol: The SSL method to use, e.g. SSLv3_method to force SSL version 3. The possible values depend on your installation of OpenSSL and are defined in the constant SSL_METHODS.

The callback parameter will be added as a listener for the 'secureConnect' event.

callback参数会被作为监听器添加到'secureConnect'事件。

tls.connect() returns a tls.TLSSocket object.

tls.connect()返回一个tls.TLSSocket对象。

Here is an example of a client of echo server as described previously:

下面是一个上述应答服务器的客户端的例子:

var tls = require('tls');
var fs = require('fs');

var options = {
  // 仅在使用客户端证书认证时才需要
  key: fs.readFileSync('client-key.pem'),
  cert: fs.readFileSync('client-cert.pem'),

  // 仅在服务器使用自签名证书时才需要
  ca: [ fs.readFileSync('server-cert.pem') ]
};

var socket = tls.connect(8000, options, function() {
  console.log('client connected',
              socket.authorized ? 'authorized' : 'unauthorized');
  process.stdin.pipe(socket);
  process.stdin.resume();
});
socket.setEncoding('utf8');
socket.on('data', function(data) {
  console.log(data);
});
socket.on('end', function() {
  server.close();
});


类: tls.TLSSocket#

Wrapper for instance of net.Socket, replaces internal socket read/write routines to perform transparent encryption/decryption of incoming/outgoing data.

new tls.TLSSocket(socket, options)#

Construct a new TLSSocket object from existing TCP socket.

socket is an instance of net.Socket

socket 是一个 net.Socket 的实例。

options is an object that might contain following properties:

options是一个可能包含以下属性的对象:

tls.createSecurePair([credentials], [isServer], [requestCert], [rejectUnauthorized])#

稳定性: 0 - 已废弃。使用 tls.TLSSocket 替代。

Creates a new secure pair object with two streams, one of which reads/writes encrypted data, and one reads/writes cleartext data. Generally the encrypted one is piped to/from an incoming encrypted data stream, and the cleartext one is used as a replacement for the initial encrypted stream.

  • credentials: A credentials object from crypto.createCredentials( ... )

  • isServer: A boolean indicating whether this tls connection should be opened as a server or a client.

  • requestCert: A boolean indicating whether a server should request a certificate from a connecting client. Only applies to server connections.

  • rejectUnauthorized: A boolean indicating whether a server should automatically reject clients with invalid certificates. Only applies to servers with requestCert enabled.

tls.createSecurePair() returns a SecurePair object with cleartext and encrypted stream properties.

NOTE: cleartext has the same APIs as tls.TLSSocket

类: SecurePair#

Returned by tls.createSecurePair.

由tls.createSecurePair返回。

事件: 'secure'#

The event is emitted from the SecurePair once the pair has successfully established a secure connection.

Similarly to the checking for the server 'secureConnection' event, pair.cleartext.authorized should be checked to confirm whether the certificate used properly authorized.

类: tls.Server#

This class is a subclass of net.Server and has the same methods on it. Instead of accepting just raw TCP connections, this accepts encrypted connections using TLS or SSL.

事件: 'secureConnection'#

function (tlsSocket) {}


This event is emitted after a new connection has been successfully handshaked. The argument is an instance of tls.TLSSocket. It has all the common stream methods and events.

socket.authorized is a boolean value which indicates if the client has been verified by one of the supplied certificate authorities for the server. If socket.authorized is false, then socket.authorizationError is set to describe how authorization failed. Implied but worth mentioning: depending on the settings of the TLS server, your unauthorized connections may be accepted. socket.npnProtocol is a string containing the selected NPN protocol. socket.servername is a string containing the servername requested with SNI.

Event: 'clientError'#

function (exception, tlsSocket) { }


When a client connection emits an 'error' event before secure connection is established - it will be forwarded here.

tlsSocket is the tls.TLSSocket that the error originated from.

事件: 'newSession'#

function (sessionId, sessionData) { }


Emitted on creation of TLS session. May be used to store sessions in external storage.

NOTE: adding this event listener will have an effect only on connections established after addition of event listener.

事件: 'resumeSession'#

function (sessionId, callback) { }


Emitted when client wants to resume previous TLS session. Event listener may perform lookup in external storage using given sessionId, and invoke callback(null, sessionData) once finished. If session can't be resumed (i.e. doesn't exist in storage) one may call callback(null, null). Calling callback(err) will terminate incoming connection and destroy socket.

NOTE: adding this event listener will have an effect only on connections established after addition of event listener.

server.listen(port, [host], [callback])#

Begin accepting connections on the specified port and host. If the host is omitted, the server will accept connections directed to any IPv4 address (INADDR_ANY).

This function is asynchronous. The last parameter callback will be called when the server has been bound.

See net.Server for more information.

更多信息见net.Server

server.close()#

Stops the server from accepting new connections. This function is asynchronous, the server is finally closed when the server emits a 'close' event.

server.address()#

Returns the bound address, the address family name and port of the server as reported by the operating system. See net.Server.address() for more information.

server.addContext(hostname, credentials)#

Add secure context that will be used if client request's SNI hostname is matching passed hostname (wildcards can be used). credentials can contain key, cert and ca.

server.maxConnections#

Set this property to reject connections when the server's connection count gets high.

server.connections#

The number of concurrent connections on the server.

服务器的并发连接数.

类: CryptoStream#

稳定性: 0 - 已废弃。使用 tls.TLSSocket 替代。

This is an encrypted stream.

这是一个被加密的流。

cryptoStream.bytesWritten#

A proxy to the underlying socket's bytesWritten accessor, this will return the total bytes written to the socket, including the TLS overhead.

类: tls.TLSSocket#

This is a wrapped version of net.Socket that does transparent encryption of written data and all required TLS negotiation.

This instance implements a duplex Stream interfaces. It has all the common stream methods and events.

事件: 'secureConnect'#

This event is emitted after a new connection has been successfully handshaked. The listener will be called no matter if the server's certificate was authorized or not. It is up to the user to test tlsSocket.authorized to see if the server certificate was signed by one of the specified CAs. If tlsSocket.authorized === false then the error can be found in tlsSocket.authorizationError. Also if NPN was used - you can check tlsSocket.npnProtocol for negotiated protocol.

tlsSocket.authorized#

A boolean that is true if the peer certificate was signed by one of the specified CAs, otherwise false

tlsSocket.authorizationError#

The reason why the peer's certificate has not been verified. This property becomes available only when tlsSocket.authorized === false.

tlsSocket.getPeerCertificate()#

Returns an object representing the peer's certificate. The returned object has some properties corresponding to the field of the certificate.

Example:

示例:

{ subject: 
   { C: 'UK',
     ST: 'Acknack Ltd',
     L: 'Rhys Jones',
     O: 'node.js',
     OU: 'Test TLS Certificate',
     CN: 'localhost' },
  issuer: 
   { C: 'UK',
     ST: 'Acknack Ltd',
     L: 'Rhys Jones',
     O: 'node.js',
     OU: 'Test TLS Certificate',
     CN: 'localhost' },
  valid_from: 'Nov 11 09:52:22 2009 GMT',
  valid_to: 'Nov  6 09:52:22 2029 GMT',
  fingerprint: '2A:7A:C2:DD:E5:F9:CC:53:72:35:99:7A:02:5A:71:38:52:EC:8A:DF' }

If the peer does not provide a certificate, it returns null or an empty object.

tlsSocket.getCipher()#

Returns an object representing the cipher name and the SSL/TLS protocol version of the current connection.

Example: { name: 'AES256-SHA', version: 'TLSv1/SSLv3' }

示例: { name: 'AES256-SHA', version: 'TLSv1/SSLv3' }

See SSL_CIPHER_get_name() and SSL_CIPHER_get_version() in http://www.openssl.org/docs/ssl/ssl.html#DEALING_WITH_CIPHERS for more information.

tlsSocket.renegotiate(options, callback)#

Initiate TLS renegotiation process. The options may contain the following fields: rejectUnauthorized, requestCert (See tls.createServer for details). callback(err) will be executed with null as err, once the renegotiation is successfully completed.

NOTE: Can be used to request peer's certificate after the secure connection has been established.

ANOTHER NOTE: When running as the server, socket will be destroyed with an error after handshakeTimeout timeout.

tlsSocket.address()#

Returns the bound address, the address family name and port of the underlying socket as reported by the operating system. Returns an object with three properties, e.g. { port: 12346, family: 'IPv4', address: '127.0.0.1' }

tlsSocket.remoteAddress#

The string representation of the remote IP address. For example, '74.125.127.100' or '2001:4860:a005::68'.

远程IP地址的字符串表示。例如,'74.125.127.100''2001:4860:a005::68'

tlsSocket.remotePort#

The numeric representation of the remote port. For example, 443.

远程端口的数值表示。例如, 443

tlsSocket.localAddress#

The string representation of the local IP address.

本地IP地址的字符串表达。

tlsSocket.localPort#

The numeric representation of the local port.

本地端口的数值表示。

字符串解码器#

稳定度: 3 - 稳定

To use this module, do require('string_decoder'). StringDecoder decodes a buffer to a string. It is a simple interface to buffer.toString() but provides additional support for utf8.

通过 require('string_decoder') 使用这个模块。StringDecoder 将一个 Buffer 解码成字符串。它是 buffer.toString() 的一个简单接口,但额外提供了对 utf8 的支持。

var StringDecoder = require('string_decoder').StringDecoder;
var decoder = new StringDecoder('utf8');

var euro = new Buffer([0xE2, 0x82, 0xAC]);
console.log(decoder.write(euro));

类: StringDecoder#

Accepts a single argument, encoding which defaults to utf8.

接受 encoding 一个参数,默认是 utf8

decoder.write(buffer)#

Returns a decoded string.

返回解码后的字符串。

decoder.end()#

Returns any trailing bytes that were left in the buffer.

返回 Buffer 中剩下的末尾字节。

File System#

稳定度: 3 - 稳定

File I/O is provided by simple wrappers around standard POSIX functions. To use this module do require('fs'). All the methods have asynchronous and synchronous forms.

文件系统模块是一个简单包装的标准 POSIX 文件 I/O 操作方法集。您可以通过调用require('fs')来获取该模块。文件系统模块中的所有方法均有异步和同步版本。

The asynchronous form always take a completion callback as its last argument. The arguments passed to the completion callback depend on the method, but the first argument is always reserved for an exception. If the operation was completed successfully, then the first argument will be null or undefined.

文件系统模块中的异步方法总是把一个完成时的回调函数 (callback) 作为最后一个参数。传给回调函数的参数取决于具体的方法,但第一个参数总是保留给异常。如果操作成功完成,则第一个参数为 null 或 undefined。

When using the synchronous form any exceptions are immediately thrown. You can use try/catch to handle exceptions or allow them to bubble up.

如果您使用的是同步版本的操作方法,任何异常都会被立即抛出。您可以使用 try/catch 来捕获异常,或者让异常向上冒泡。

Here is an example of the asynchronous version:

这里是一个异步版本的例子:

fs.unlink('/tmp/hello', function (err) {
  if (err) throw err;
  console.log('successfully deleted /tmp/hello');
});

Here is the synchronous version:

这是同步版本的例子:

fs.unlinkSync('/tmp/hello')
console.log('successfully deleted /tmp/hello');

With the asynchronous methods there is no guaranteed ordering. So the following is prone to error:

当使用异步版本时不能保证执行顺序,因此下面这个例子很容易出错:

fs.rename('/tmp/hello', '/tmp/world', function (err) {
  if (err) throw err;
  console.log('renamed complete');
});
fs.stat('/tmp/world', function (err, stats) {
  if (err) throw err;
  console.log('stats: ' + JSON.stringify(stats));
});

It could be that fs.stat is executed before fs.rename. The correct way to do this is to chain the callbacks.

fs.stat 有可能在 fs.rename 之前执行。正确的做法是链式调用回调函数:

fs.rename('/tmp/hello', '/tmp/world', function (err) {
  if (err) throw err;
  fs.stat('/tmp/world', function (err, stats) {
    if (err) throw err;
    console.log('stats: ' + JSON.stringify(stats));
  });
});

In busy processes, the programmer is strongly encouraged to use the asynchronous versions of these calls. The synchronous versions will block the entire process until they complete--halting all connections.

在繁重的任务中,强烈推荐使用这些函数的异步版本.同步版本会阻塞进程,直到完成处理,也就是说会暂停所有的连接.

Relative path to filename can be used, remember however that this path will be relative to process.cwd().

可以使用文件名的相对路径, 但是记住这个路径是相对于process.cwd()的.

Most fs functions let you omit the callback argument. If you do, a default callback is used that rethrows errors. To get a trace to the original call site, set the NODE_DEBUG environment variable:

大部分 fs 函数允许您省略回调函数 (callback) 参数。如果省略,将会使用一个默认回调函数来重新抛出 (rethrow) 错误。要获得原始调用点的堆栈跟踪 (trace) 信息,可以设置环境变量 NODE_DEBUG:

$ env NODE_DEBUG=fs node script.js
fs.js:66
        throw err;
              ^
Error: EISDIR, read
    at rethrow (fs.js:61:21)
    at maybeCallback (fs.js:79:42)
    at Object.fs.readFile (fs.js:153:18)
    at bad (/path/to/script.js:2:17)
    at Object.<anonymous> (/path/to/script.js:5:1)
    <etc.>

fs.rename(oldPath, newPath, callback)#

Asynchronous rename(2). No arguments other than a possible exception are given to the completion callback.

异步版的 rename(2)。完成时的回调函数(callback)只接受一个参数:可能出现的异常信息。

fs.renameSync(oldPath, newPath)#

Synchronous rename(2).

同步版本的rename(2).

fs.ftruncate(fd, len, callback)#

Asynchronous ftruncate(2). No arguments other than a possible exception are given to the completion callback.

异步版本的ftruncate(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.ftruncateSync(fd, len)#

Synchronous ftruncate(2).

同步版本的ftruncate(2).

fs.truncate(path, len, callback)#

Asynchronous truncate(2). No arguments other than a possible exception are given to the completion callback.

异步版本的truncate(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.truncateSync(path, len)#

Synchronous truncate(2).

同步版本的truncate(2).

fs.chown(path, uid, gid, callback)#

Asynchronous chown(2). No arguments other than a possible exception are given to the completion callback.

异步版本的chown(2).完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.chownSync(path, uid, gid)#

Synchronous chown(2).

同步版本的chown(2).

fs.fchown(fd, uid, gid, callback)#

Asynchronous fchown(2). No arguments other than a possible exception are given to the completion callback.

异步版本的fchown(2)。回调函数的参数除了出现错误时有一个错误对象外,没有其它参数。

fs.fchownSync(fd, uid, gid)#

Synchronous fchown(2).

同步版本的fchown(2).

fs.lchown(path, uid, gid, callback)#

Asynchronous lchown(2). No arguments other than a possible exception are given to the completion callback.

异步版的lchown(2)。完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.lchownSync(path, uid, gid)#

Synchronous lchown(2).

同步版本的lchown(2).

fs.chmod(path, mode, callback)#

Asynchronous chmod(2). No arguments other than a possible exception are given to the completion callback.

异步版的 chmod(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.chmodSync(path, mode)#

Synchronous chmod(2).

同步版的 chmod(2).

fs.fchmod(fd, mode, callback)#

Asynchronous fchmod(2). No arguments other than a possible exception are given to the completion callback.

异步版的 fchmod(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.fchmodSync(fd, mode)#

Synchronous fchmod(2).

同步版的 fchmod(2).

fs.lchmod(path, mode, callback)#

Asynchronous lchmod(2). No arguments other than a possible exception are given to the completion callback.

异步版的 lchmod(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

Only available on Mac OS X.

仅在 Mac OS X 系统下可用。

fs.lchmodSync(path, mode)#

Synchronous lchmod(2).

同步版的 lchmod(2).

fs.stat(path, callback)#

Asynchronous stat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats object. See the fs.Stats section below for more information.

异步版的 stat(2). 回调函数(callback) 接收两个参数: (err, stats) ,其中 stats 是一个 fs.Stats 对象。 详情请参考 fs.Stats

fs.lstat(path, callback)#

Asynchronous lstat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats object. lstat() is identical to stat(), except that if path is a symbolic link, then the link itself is stat-ed, not the file that it refers to.

异步版的 lstat(2). 回调函数(callback)接收两个参数: (err, stats) 其中 stats 是一个 fs.Stats 对象。 lstat()stat() 相同,区别在于: 若 path 是一个符号链接时(symbolic link),读取的是该符号链接本身,而不是它所 链接到的文件。

fs.fstat(fd, callback)#

Asynchronous fstat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats object. fstat() is identical to stat(), except that the file to be stat-ed is specified by the file descriptor fd.

异步版的 fstat(2). 回调函数(callback)接收两个参数: (err, stats) 其中 stats 是一个 fs.Stats 对象。 fstat()stat() 相同,区别在于: 要读取的文件(译者注:即第一个参数)是一个文件描述符(file descriptor) fd

fs.statSync(path)#

Synchronous stat(2). Returns an instance of fs.Stats.

同步版的 stat(2). 返回一个 fs.Stats 实例。

fs.lstatSync(path)#

Synchronous lstat(2). Returns an instance of fs.Stats.

同步版的 lstat(2). 返回一个 fs.Stats 实例。

fs.fstatSync(fd)#

Synchronous fstat(2). Returns an instance of fs.Stats.

同步版的 fstat(2). 返回一个 fs.Stats 实例。

fs.link(srcpath, dstpath, callback)#

Asynchronous link(2). No arguments other than a possible exception are given to the completion callback.

异步版的 link(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息。

fs.linkSync(srcpath, dstpath)#

Synchronous link(2).

同步版的 link(2).

fs.symlink(srcpath, dstpath, [type], callback)#

Asynchronous symlink(2). No arguments other than a possible exception are given to the completion callback. type argument can be either 'dir', 'file', or 'junction' (default is 'file'). It is only used on Windows (ignored on other platforms). Note that Windows junction points require the destination path to be absolute. When using 'junction', the destination argument will automatically be normalized to absolute path.

异步版的 symlink(2)。完成时的回调函数(callback)只接受一个参数:可能出现的异常信息。type 可以是 'dir'、'file' 或者 'junction'(默认是 'file'),此参数仅用于 Windows 系统(在其他平台上会被忽略)。注意:Windows 的 junction 点要求目标路径必须是绝对路径,当使用 'junction' 时,dstpath 参数会被自动规范化为绝对路径。

fs.symlinkSync(srcpath, dstpath, [type])#

Synchronous symlink(2).

同步版的 symlink(2).

fs.readlink(path, callback)#

Asynchronous readlink(2). The callback gets two arguments (err, linkString).

异步版的 readlink(2). 回调函数(callback)接收两个参数: (err, linkString).

fs.readlinkSync(path)#

Synchronous readlink(2). Returns the symbolic link's string value.

同步版的 readlink(2). 返回符号链接(symbolic link)的字符串值。

fs.realpath(path, [cache], callback)#

Asynchronous realpath(2). The callback gets two arguments (err, resolvedPath). May use process.cwd to resolve relative paths. cache is an object literal of mapped paths that can be used to force a specific path resolution or avoid additional fs.stat calls for known real paths.

异步版的 realpath(2)。回调函数(callback)接收两个参数: (err, resolvedPath)。可能会使用 process.cwd 来解析相对路径。cache 是一个包含路径映射的对象字面量,可用于强制指定路径的解析结果,或者避免对已知的真实路径进行额外的 fs.stat 调用。

Example:

示例:

var cache = {'/etc':'/private/etc'};
fs.realpath('/etc/passwd', cache, function (err, resolvedPath) {
  if (err) throw err;
  console.log(resolvedPath);
});

fs.realpathSync(path, [cache])#

Synchronous realpath(2). Returns the resolved path.

realpath(2) 的同步版本。返回解析出的路径。

fs.unlink(path, callback)#

Asynchronous unlink(2). No arguments other than a possible exception are given to the completion callback.

异步版的 unlink(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.unlinkSync(path)#

Synchronous unlink(2).

同步版的 unlink(2).

fs.rmdir(path, callback)#

Asynchronous rmdir(2). No arguments other than a possible exception are given to the completion callback.

异步版的 rmdir(2)。完成时的回调函数(callback)只接受一个参数:可能出现的异常信息。

fs.rmdirSync(path)#

Synchronous rmdir(2).

同步版的 rmdir(2).

fs.mkdir(path, [mode], callback)#

Asynchronous mkdir(2). No arguments other than a possible exception are given to the completion callback. mode defaults to 0777.

异步版的 mkdir(2)。完成时的回调函数(callback)只接受一个参数:可能出现的异常信息。文件 mode 默认为 0777。

fs.mkdirSync(path, [mode])#

Synchronous mkdir(2).

同步版的 mkdir(2)。

fs.readdir(path, callback)#

Asynchronous readdir(3). Reads the contents of a directory. The callback gets two arguments (err, files) where files is an array of the names of the files in the directory excluding '.' and '..'.

异步版的 readdir(3)。 读取 path 路径所在目录的内容。 回调函数 (callback) 接受两个参数 (err, files) 其中 files 是一个存储目录中所包含的文件名称的数组,数组中不包括 '.''..'

fs.readdirSync(path)#

Synchronous readdir(3). Returns an array of filenames excluding '.' and '..'.

同步版的 readdir(3). 返回文件名数组,其中不包括 '.''..' 目录.

fs.close(fd, callback)#

Asynchronous close(2). No arguments other than a possible exception are given to the completion callback.

异步版 close(2). 完成时的回调函数(callback)只接受一个参数:可能出现的异常信息.

fs.closeSync(fd)#

Synchronous close(2).

同步版的 close(2).

fs.open(path, flags, [mode], callback)#

Asynchronous file open. See open(2). flags can be:

异步版的文件打开. 详见 open(2). flags 可以是:

  • 'r' - Open file for reading. An exception occurs if the file does not exist.

  • 'r' - 以【只读】的方式打开文件. 当文件不存在时产生异常.

  • 'r+' - Open file for reading and writing. An exception occurs if the file does not exist.

  • 'r+' - 以【读写】的方式打开文件. 当文件不存在时产生异常.

  • 'rs' - Open file for reading in synchronous mode. Instructs the operating system to bypass the local file system cache.

  • 'rs' - 以同步模式打开文件用于【只读】操作,并指示操作系统绕过本地文件系统缓存.

    This is primarily useful for opening files on NFS mounts as it allows you to skip the potentially stale local cache. It has a very real impact on I/O performance so don't use this flag unless you need it.

该功能主要用于打开位于 NFS 挂载上的文件,因为它允许你跳过可能已经过时的本地缓存。它对 I/O 性能有非常实际的影响,因此除非确实需要,请不要使用该标志.

Note that this doesn't turn fs.open() into a synchronous blocking call. If that's what you want then you should be using fs.openSync()

注意: 这并不意味着 fs.open() 变成了一个同步阻塞的请求. 如果你想要一个同步阻塞的请求你应该使用 fs.openSync().

  • 'rs+' - Open file for reading and writing, telling the OS to open it synchronously. See notes for 'rs' about using this with caution.

  • 'rs+' - 同步模式下, 以【读写】的方式打开文件. 请谨慎使用该方式, 详细请查看 'rs' 的注释.

  • 'w' - Open file for writing. The file is created (if it does not exist) or truncated (if it exists).

  • 'w' - 以【只写】的形式打开文件. 文件会被创建 (如果文件不存在) 或者覆盖 (如果存在).

  • 'wx' - Like 'w' but fails if path exists.

  • 'wx' - 类似 'w' 区别是如果文件存在则操作会失败.

  • 'w+' - Open file for reading and writing. The file is created (if it does not exist) or truncated (if it exists).

  • 'w+' - 以【读写】的方式打开文件. 文件会被创建 (如果文件不存在) 或者覆盖 (如果存在).

  • 'wx+' - Like 'w+' but fails if path exists.

  • 'wx+' - 类似 'w+' 区别是如果文件存在则操作会失败.

  • 'a' - Open file for appending. The file is created if it does not exist.

  • 'a' - 以【附加】的形式打开文件,即新写入的数据会附加在原来的文件内容之后. 如果文件不存在则会默认创建.

  • 'ax' - Like 'a' but fails if path exists.

  • 'ax' - 类似 'a' 区别是如果文件存在则操作会失败.

  • 'a+' - Open file for reading and appending. The file is created if it does not exist.

  • 'a+' - 以【读取】和【附加】的形式打开文件. 如果文件不存在则会默认创建.

  • 'ax+' - Like 'a+' but fails if path exists.

  • 'ax+' - 类似 'a+' 区别是如果文件存在则操作会失败.

mode sets the file mode (permission and sticky bits), but only if the file was created. It defaults to 0666, readable and writeable.

参数 mode 用于设置文件模式(权限和 sticky 位),但前提是这个文件是被新创建的。缺省值为 0666,可读且可写.

The callback gets two arguments (err, fd).

该 callback 接收两个参数 (err, fd).

The exclusive flag 'x' (O_EXCL flag in open(2)) ensures that path is newly created. On POSIX systems, path is considered to exist even if it is a symlink to a non-existent file. The exclusive flag may or may not work with network file systems.

排除 (exclusive) 标识 'x' (对应 open(2) 的 O_EXCL 标识) 保证 path 是一个新建的文件。 POSIX 操作系统上,即使 path 是一个指向不存在位置的符号链接,也会被认定为文件存在。 排除标识在网络文件系统不能确定是否有效。

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.

在 Linux 上,无法对以追加 (append) 模式打开的文件进行指定位置的写入操作。 内核会忽略位置参数并且总是将数据追加到文件尾部。

fs.openSync(path, flags, [mode])#

Synchronous version of fs.open().

fs.open() 的同步版.

fs.utimes(path, atime, mtime, callback)#

fs.utimesSync(path, atime, mtime)#

Change file timestamps of the file referenced by the supplied path.

更改 path 所指向的文件的时间戳。

fs.futimes(fd, atime, mtime, callback)#

fs.futimesSync(fd, atime, mtime)#

Change the file timestamps of a file referenced by the supplied file descriptor.

更改文件描述符 (file discriptor) 所指向的文件的时间戳。

fs.fsync(fd, callback)#

Asynchronous fsync(2). No arguments other than a possible exception are given to the completion callback.

异步版本的 fsync(2)。回调函数仅含有一个异常 (exception) 参数。

fs.fsyncSync(fd)#

Synchronous fsync(2).

fsync(2) 的同步版本。

fs.write(fd, buffer, offset, length[, position], callback)#

Write buffer to the file specified by fd.

通过文件标识fd,向指定的文件中写入buffer

offset and length determine the part of the buffer to be written.

offsetlength 可以确定从哪个位置开始写入buffer。

position refers to the offset from the beginning of the file where this data should be written. If typeof position !== 'number', the data will be written at the current position. See pwrite(2).

position 指定了数据写入位置相对于文件开头的偏移量。如果 typeof position !== 'number',数据会被写入当前位置。详见 pwrite(2)。

The callback will be given three arguments (err, written, buffer) where written specifies how many bytes were written from buffer.

回调中会给出三个参数 (err, written, buffer)written 说明从buffer写入的字节数。

Note that it is unsafe to use fs.write multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream is strongly recommended.

注意,fs.write多次地在同一个文件中使用而没有等待回调是不安全的。在这种情况下,强烈推荐使用fs.createWriteStream

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.

在 Linux 上,无法对以追加 (append) 模式打开的文件进行指定位置的写入操作。 内核会忽略位置参数并且总是将数据追加到文件尾部。

fs.write(fd, data[, position[, encoding]], callback)#

Write data to the file specified by fd. If data is not a Buffer instance then the value will be coerced to a string.

data写入到文档中通过指定的fd,如果data不是buffer对象的实例则会把值强制转化成一个字符串。

position refers to the offset from the beginning of the file where this data should be written. If typeof position !== 'number' the data will be written at the current position. See pwrite(2).

position 指定了数据写入位置相对于文件开头的偏移量。如果 typeof position !== 'number',数据会被写入当前位置。详见 pwrite(2)。

encoding is the expected string encoding.

encoding 是期望的字符串编码。

The callback will receive the arguments (err, written, string) where written specifies how many bytes the passed string required to be written. Note that bytes written is not the same as string characters. See Buffer.byteLength.

回调函数会得到参数 (err, written, string),其中 written 表示写入传入字符串所需的字节数。注意写入的字节数与字符串的字符数并不相同。参见 Buffer.byteLength。

Unlike when writing buffer, the entire string must be written. No substring may be specified. This is because the byte offset of the resulting data may not be the same as the string offset.

与写入 buffer 不同,字符串必须被完整写入,不能只指定其中的子串。这是因为写出数据的字节偏移量与字符串的字符偏移量可能不一致。

Note that it is unsafe to use fs.write multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream is strongly recommended.

注意,fs.write多次地在同一个文件中使用而没有等待回调是不安全的。在这种情况下,强烈推荐使用fs.createWriteStream

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.

在 Linux 上,无法对以追加 (append) 模式打开的文件进行指定位置的写入操作。 内核会忽略位置参数并且总是将数据追加到文件尾部。

fs.writeSync(fd, buffer, offset, length[, position])#

fs.writeSync(fd, data[, position[, encoding]])#

Synchronous versions of fs.write(). Returns the number of bytes written.

同步版本的fs.write()。返回写入的字节数。

fs.read(fd, buffer, offset, length, position, callback)#

Read data from the file specified by fd.

从指定的文档标识符fd读取文件数据。

buffer is the buffer that the data will be written to.

buffer 是缓冲区,数据将会写入这里。

offset is the offset in the buffer to start writing at.

offset 是开始向缓冲区 buffer 写入的偏移量。

length is an integer specifying the number of bytes to read.

length 是一个整形值,指定了读取的字节数。

position is an integer specifying where to begin reading from in the file. If position is null, data will be read from the current file position.

position 是一个整形值,指定了从哪里开始读取文件,如果positionnull,将会从文件当前的位置读取数据。

The callback is given the three arguments, (err, bytesRead, buffer).

回调函数会得到三个参数 (err, bytesRead, buffer),分别为错误信息、读取的字节数以及缓冲区。

fs.readSync(fd, buffer, offset, length, position)#

Synchronous version of fs.read. Returns the number of bytesRead.

fs.read 函数的同步版本。返回读取的字节数 (bytesRead)。

fs.readFile(filename, [options], callback)#

  • filename String
  • options Object
    • encoding String | Null default = null
    • flag String default = 'r'
  • callback Function

  • filename String
  • options Object
    • encoding String | Null 默认值 = null
    • flag String 默认值 = 'r'
  • callback Function

Asynchronously reads the entire contents of a file. Example:

异步读取一个文件的全部内容。举例:

fs.readFile('/etc/passwd', function (err, data) {
  if (err) throw err;
  console.log(data);
});

The callback is passed two arguments (err, data), where data is the contents of the file.

回调函数传递了两个参数 (err, data), data 就是文件的内容。

If no encoding is specified, then the raw buffer is returned.

如果未指定编码方式,原生buffer就会被返回。

fs.readFileSync(filename, [options])#

Synchronous version of fs.readFile. Returns the contents of the filename.

fs.readFile的同步版本。 返回文件名为 filename 的文件内容。

If the encoding option is specified then this function returns a string. Otherwise it returns a buffer.

如果 encoding 选项被指定, 那么这个函数返回一个字符串。如果未指定,则返回一个原生buffer。

fs.writeFile(filename, data, [options], callback)#

  • filename String
  • data String | Buffer
  • options Object
    • encoding String | Null default = 'utf8'
    • mode Number default = 438 (aka 0666 in Octal)
    • flag String default = 'w'
  • callback Function

  • filename String
  • data String | Buffer
  • options Object
    • encoding String | Null 默认值 = 'utf8'
    • mode Number 默认值 = 438 (即八进制的 0666)
    • flag String 默认值 = 'w'
  • callback Function

Asynchronously writes data to a file, replacing the file if it already exists. data can be a string or a buffer.

异步的将数据写入一个文件, 如果文件原先存在,会被替换。 data 可以是一个string,也可以是一个原生buffer。

The encoding option is ignored if data is a buffer. It defaults to 'utf8'.

如果 data 是一个 buffer,则 encoding 选项会被忽略。encoding 缺省为 'utf8'。

Example:

示例:

fs.writeFile('message.txt', 'Hello Node', function (err) {
  if (err) throw err;
  console.log('It\'s saved!'); //文件被保存
});

fs.writeFileSync(filename, data, [options])#

The synchronous version of fs.writeFile.

fs.writeFile的同步版本。

fs.appendFile(filename, data, [options], callback)#

  • filename String
  • data String | Buffer
  • options Object
    • encoding String | Null default = 'utf8'
    • mode Number default = 438 (aka 0666 in Octal)
    • flag String default = 'a'
  • callback Function

  • filename String
  • data String | Buffer
  • options Object
    • encoding String | Null 默认值 = 'utf8'
    • mode Number 默认值 = 438 (即八进制的 0666)
    • flag String 默认值 = 'a'
  • callback Function

Asynchronously append data to a file, creating the file if it not yet exists. data can be a string or a buffer.

异步的将数据添加到一个文件的尾部,如果文件不存在,会创建一个新的文件。 data 可以是一个string,也可以是原生buffer。

Example:

示例:

fs.appendFile('message.txt', 'data to append', function (err) {
  if (err) throw err;
  console.log('The "data to append" was appended to file!'); //数据被添加到文件的尾部
});

fs.appendFileSync(filename, data, [options])#

The synchronous version of fs.appendFile.

fs.appendFile的同步版本。

fs.watchFile(filename, [options], listener)#

稳定性: 2 - 不稳定.   尽可能的话推荐使用 fs.watch 来代替。

Watch for changes on filename. The callback listener will be called each time the file is accessed.

监视filename指定的文件的改变. 回调函数 listener 会在文件每一次被访问时被调用。

The second argument is optional. The options if provided should be an object containing two members a boolean, persistent, and interval. persistent indicates whether the process should continue to run as long as files are being watched. interval indicates how often the target should be polled, in milliseconds. The default is { persistent: true, interval: 5007 }.

第二个参数是可选的。 如果提供此参数,options 应该是包含两个成员persistentinterval的对象,其中persistent值为boolean类型。persistent指定进程是否应该在文件被监视(watch)时继续运行,interval指定了目标文件被查询的间隔,以毫秒为单位。缺省值为{ persistent: true, interval: 5007 }

The listener gets two arguments the current stat object and the previous stat object:

listener 有两个参数,第一个为文件现在的状态,第二个为文件的前一个状态。

fs.watchFile('message.text', function (curr, prev) {
  console.log('the current mtime is: ' + curr.mtime);
  console.log('the previous mtime was: ' + prev.mtime);
});

These stat objects are instances of fs.Stat.

listener中的文件状态对象类型为fs.Stat

If you want to be notified when the file was modified, not just accessed you need to compare curr.mtime and prev.mtime.

如果你想在文件被修改(而不仅仅是被访问)时得到通知,需要比较 curr.mtime 和 prev.mtime。

fs.unwatchFile(filename, [listener])#

稳定性: 2 - 不稳定.   尽可能的话推荐使用 fs.watch 来代替。

Stop watching for changes on filename. If listener is specified, only that particular listener is removed. Otherwise, all listeners are removed and you have effectively stopped watching filename.

停止监视文件名为 filename 的文件。如果指定了 listener 参数,则只移除该监听器。否则,所有的监听器都会被移除,也就彻底停止了对 filename 的监视。

Calling fs.unwatchFile() with a filename that is not being watched is a no-op, not an error.

调用 fs.unwatchFile() 时,如果传入的文件名并未被监视,不会产生错误,而只是一个空操作 (no-op)。

fs.watch(filename, [options], [listener])#

稳定性: 2 - 不稳定的

Watch for changes on filename, where filename is either a file or a directory. The returned object is a fs.FSWatcher.

监视 filename 的变化,filename 可以是一个文件或者目录。该函数返回的对象是 fs.FSWatcher。

The second argument is optional. The options if provided should be an object containing a boolean member persistent, which indicates whether the process should continue to run as long as files are being watched. The default is { persistent: true }.

第二个参数是可选的。如果提供 options,它应当是一个只包含布尔类型成员 persistent 的对象。persistent 指定了只要文件被监视,进程是否应当继续运行。缺省值为 { persistent: true }。

The listener callback gets two arguments (event, filename). event is either 'rename' or 'change', and filename is the name of the file which triggered the event.

监听器的回调函数得到两个参数 (event, filename)。其中 event 是 'rename'(重命名)或者 'change'(改变),而 filename 则是触发事件的文件名。

注意事项#

The fs.watch API is not 100% consistent across platforms, and is unavailable in some situations.

fs.watch 接口并不是在所有平台上都表现一致,并且在某些情况下不可用。

可用性#

This feature depends on the underlying operating system providing a way to be notified of filesystem changes.

此功能依赖于操作系统底层提供的方法来监视文件系统的变化。

  • On Linux systems, this uses inotify.
  • On BSD systems (including OS X), this uses kqueue.
  • On SunOS systems (including Solaris and SmartOS), this uses event ports.
  • On Windows systems, this feature depends on ReadDirectoryChangesW.

  • 在 Linux 操作系统上,使用 inotify

  • 在 BSD 操作系统上 (包括 OS X),使用 kqueue
  • 在 SunOS 操作系统上 (包括 Solaris 和 SmartOS),使用 event ports
  • 在 Windows 操作系统上,该特性依赖于 ReadDirectoryChangesW

If the underlying functionality is not available for some reason, then fs.watch will not be able to function. For example, watching files or directories on network file systems (NFS, SMB, etc.) often doesn't work reliably or at all.

如果系统底层函数出于某些原因不可用,那么 fs.watch 也就无法工作。例如,监视网络文件系统(如 NFS, SMB 等)的文件或者目录,就时常不能稳定的工作,有时甚至完全不起作用。

You can still use fs.watchFile, which uses stat polling, but it is slower and less reliable.

你仍然可以调用使用了文件状态调查的 fs.watchFile,但是会比较慢而且比较不可靠。

文件名参数#

Providing filename argument in the callback is not supported on every platform (currently it's only supported on Linux and Windows). Even on supported platforms filename is not always guaranteed to be provided. Therefore, don't assume that filename argument is always provided in the callback, and have some fallback logic if it is null.

在回调函数中提供 filename 参数并不是每个平台都支持(目前仅 Linux 和 Windows 支持)。即便在支持的平台上,也不能保证每次回调都会提供 filename。因此,不要假设 filename 参数总是会在回调函数中提供;当它为 null 时,应当有相应的后备 (fallback) 逻辑。

fs.watch('somedir', function (event, filename) {
  console.log('event is: ' + event);
  if (filename) {
    console.log('filename provided: ' + filename);
  } else {
    console.log('filename not provided');
  }
});

fs.exists(path, callback)#

Test whether or not the given path exists by checking with the file system. Then call the callback argument with either true or false. Example:

检查指定路径的文件或者目录是否存在。接着通过 callback 传入的参数指明存在 (true) 或者不存在 (false)。示例:

fs.exists('/etc/passwd', function (exists) {
  util.debug(exists ? "存在" : "不存在!");
});

fs.existsSync(path)#

Synchronous version of fs.exists.

fs.exists 函数的同步版。

Class: fs.Stats#

Objects returned from fs.stat(), fs.lstat() and fs.fstat() and their synchronous counterparts are of this type.

fs.stat(), fs.lstat()fs.fstat() 以及他们对应的同步版本返回的对象。

  • stats.isFile()
  • stats.isDirectory()
  • stats.isBlockDevice()
  • stats.isCharacterDevice()
  • stats.isSymbolicLink() (only valid with fs.lstat())
  • stats.isFIFO()
  • stats.isSocket()

  • stats.isFile()

  • stats.isDirectory()
  • stats.isBlockDevice()
  • stats.isCharacterDevice()
  • stats.isSymbolicLink() (仅在与 fs.lstat()一起使用时合法)
  • stats.isFIFO()
  • stats.isSocket()

For a regular file util.inspect(stats) would return a string very similar to this:

对于一个普通文件使用 util.inspect(stats) 将会返回一个类似如下输出的字符串:

{ dev: 2114,
  ino: 48064969,
  mode: 33188,
  nlink: 1,
  uid: 85,
  gid: 100,
  rdev: 0,
  size: 527,
  blksize: 4096,
  blocks: 8,
  atime: Mon, 10 Oct 2011 23:24:11 GMT,
  mtime: Mon, 10 Oct 2011 23:24:11 GMT,
  ctime: Mon, 10 Oct 2011 23:24:11 GMT,
  birthtime: Mon, 10 Oct 2011 23:24:11 GMT }

Please note that atime, mtime, birthtime, and ctime are instances of Date object and to compare the values of these objects you should use appropriate methods. For most general uses getTime() will return the number of milliseconds elapsed since 1 January 1970 00:00:00 UTC and this integer should be sufficient for any comparison, however there additional methods which can be used for displaying fuzzy information. More details can be found in the MDN JavaScript Reference page.

请注意 atime、mtime、birthtime 和 ctime 都是 Date 对象的实例,比较这些值时应当使用合适的方法。大多数情况下,使用 getTime() 会返回自 1970 年 1 月 1 日 00:00:00 UTC 以来经过的毫秒数,这个整数值足以满足任何比较需求。此外还有一些可以显示模糊时间信息的方法。更多细节请查看 MDN JavaScript Reference 页面。

Stat Time Values#

The times in the stat object have the following semantics:

在状态对象(stat object)中的时间有以下语义:

  • atime "Access Time" - Time when file data last accessed. Changed by the mknod(2), utimes(2), and read(2) system calls.
  • mtime "Modified Time" - Time when file data last modified. Changed by the mknod(2), utimes(2), and write(2) system calls.
  • ctime "Change Time" - Time when file status was last changed (inode data modification). Changed by the chmod(2), chown(2), link(2), mknod(2), rename(2), unlink(2), utimes(2), read(2), and write(2) system calls.
  • birthtime "Birth Time" - Time of file creation. Set once when the file is created. On filesystems where birthtime is not available, this field may instead hold either the ctime or 1970-01-01T00:00Z (ie, unix epoch timestamp 0). On Darwin and other FreeBSD variants, also set if the atime is explicitly set to an earlier value than the current birthtime using the utimes(2) system call.

  • atime "Access Time" - 文件数据上次被访问的时间。会被 mknod(2)、utimes(2) 和 read(2) 等系统调用改变。

  • mtime "Modified Time" - 文件数据上次被修改的时间。会被 mknod(2)、utimes(2) 和 write(2) 等系统调用改变。
  • ctime "Change Time" - 文件状态(inode 数据)上次改变的时间。会被 chmod(2)、chown(2)、link(2)、mknod(2)、rename(2)、unlink(2)、utimes(2)、read(2) 和 write(2) 等系统调用改变。
  • birthtime "Birth Time" - 文件的创建时间,在文件创建时设置一次。在不提供 birthtime 的文件系统中,这个字段会被 ctime 或 1970-01-01T00:00Z(即 Unix 时间戳 0)填充。在 Darwin 和其他 FreeBSD 变体中,如果使用 utimes(2) 系统调用将 atime 显式设置为比当前 birthtime 更早的值,birthtime 也会随之更新。

Prior to Node v0.12, the ctime held the birthtime on Windows systems. Note that as of v0.12, ctime is not "creation time", and on Unix systems, it never was.

在 Node v0.12 版本之前,ctime 在 Windows 系统上持有 birthtime 的值。注意从 v0.12 起,ctime 不再是“创建时间”;而在 Unix 系统中,它从来都不是。

fs.createReadStream(path, [options])#

Returns a new ReadStream object (See Readable Stream).

返回一个新的 ReadStream 对象 (详见 Readable Stream).

options is an object with the following defaults:

options 是一个包含下列缺省值的对象:

{ flags: 'r',
  encoding: null,
  fd: null,
  mode: 0666,
  autoClose: true
}

options can include start and end values to read a range of bytes from the file instead of the entire file. Both start and end are inclusive and start at 0. The encoding can be 'utf8', 'ascii', or 'base64'.

options 可以提供 startend 值用于读取文件内的特定范围而非整个文件。 startend 都是包含在范围内的(inclusive, 可理解为闭区间)并且以 0 开始。 encoding 可选为 'utf8', 'ascii' 或者 'base64'

If autoClose is false, then the file descriptor won't be closed, even if there's an error. It is your responsibility to close it and make sure there's no file descriptor leak. If autoClose is set to true (default behavior), on error or end the file descriptor will be closed automatically.

如果 autoClose 为 false 则即使在发生错误时也不会关闭文件描述符 (file descriptor)。 此时你需要负责关闭文件,避免文件描述符泄露 (leak)。 如果 autoClose 为 true (缺省值), 当发生 error 或者 end 事件时,文件描述符会被自动释放。

An example to read the last 10 bytes of a file which is 100 bytes long:

一个从100字节的文件中读取最后10字节的例子:

fs.createReadStream('sample.txt', {start: 90, end: 99});

Class: fs.ReadStream#

ReadStream is a Readable Stream.

ReadStream 是一个可读的流(Readable Stream).

事件: 'open'#

  • fd Integer file descriptor used by the ReadStream.

  • fd 整型 ReadStream 所使用的文件描述符。

Emitted when the ReadStream's file is opened.

当 ReadStream 的文件被打开时触发。

fs.createWriteStream(path, [options])#

Returns a new WriteStream object (See Writable Stream).

返回一个新的 WriteStream 对象 (详见 Writable Stream).

options is an object with the following defaults:

options 是一个包含下列缺省值的对象:

{ flags: 'w',
  encoding: null,
  mode: 0666 }

options may also include a start option to allow writing data at some position past the beginning of the file. Modifying a file rather than replacing it may require a flags mode of r+ rather than the default mode w.

options 也可以包含一个 start 选项,用于指定在文件中开始写入数据的位置。修改文件而不是替换文件时,需要将 flags 模式指定为 r+ 而不是缺省的 w。

Class: fs.WriteStream#

WriteStream is a Writable Stream.

WriteStream 是一个可写的流(Writable Stream).

事件: 'open'#

  • fd Integer file descriptor used by the WriteStream.

  • fd 整型 WriteStream 所使用的文件描述符。

Emitted when the WriteStream's file is opened.

当 WriteStream 的文件被打开时触发。

file.bytesWritten#

The number of bytes written so far. Does not include data that is still queued for writing.

已写的字节数。不包含仍在队列中准备写入的数据。

Class: fs.FSWatcher#

Objects returned from fs.watch() are of this type.

fs.watch() 返回的对象类型。

watcher.close()#

Stop watching for changes on the given fs.FSWatcher.

停止监视指定 fs.FSWatcher 上的变化。

事件: 'change'#

  • event String The type of fs change
  • filename String The filename that changed (if relevant/available)

  • event 字符串 fs 改变的类型

  • filename 字符串 发生变化的文件名(如果相关/可用)

Emitted when something changes in a watched directory or file. See more details in fs.watch.

当正在观察的目录或文件发生变动时触发。更多细节,详见 fs.watch

事件: 'error'#

  • error Error object

  • error Error 对象

Emitted when an error occurs.

当产生错误时触发。

路径 (Path)#

稳定度: 3 - 稳定

This module contains utilities for handling and transforming file paths. Almost all these methods perform only string transformations. The file system is not consulted to check whether paths are valid.

本模块包含一套用于处理和转换文件路径的工具集。几乎所有的方法仅对字符串进行转换, 文件系统是不会检查路径是否真实有效的。

Use require('path') to use this module. The following methods are provided:

通过 require('path') 来加载此模块。以下是本模块所提供的方法:

path.normalize(p)#

Normalize a string path, taking care of '..' and '.' parts.

规范化字符串路径,处理其中的 '..' 和 '.' 部分。

When multiple slashes are found, they're replaced by a single one; when the path contains a trailing slash, it is preserved. On Windows backslashes are used.

当发现有多个连续的斜杠时,会替换成一个; 当路径末尾包含斜杠时,会保留; 在 Windows 系统会使用反斜杠。

Example:

示例:

path.normalize('/foo/bar//baz/asdf/quux/..')
// returns
'/foo/bar/baz/asdf'

path.join([path1], [path2], [...])#

Join all arguments together and normalize the resulting path.

组合参数中的所有路径,返回规范化后的路径。

Arguments must be strings. In v0.8, non-string arguments were silently ignored. In v0.10 and up, an exception is thrown.

参数必须是字符串。在 v0.8 版本非字符串参数会被悄悄忽略。 在 v0.10 及以后版本将会抛出一个异常。

Example:

示例:

path.join('foo', {}, 'bar')
// 抛出异常
TypeError: Arguments to path.join must be strings

path.resolve([from ...], to)#

Resolves to to an absolute path.

to 参数解析为一个绝对路径。

If to isn't already absolute from arguments are prepended in right to left order, until an absolute path is found. If after using all from paths still no absolute path is found, the current working directory is used as well. The resulting path is normalized, and trailing slashes are removed unless the path gets resolved to the root directory. Non-string arguments are ignored.

如果 to 不是绝对路径,则会将 from 参数从右至左依次前置,直到找到一个绝对路径为止。如果使用完所有 from 路径后仍未找到绝对路径,则会使用当前工作目录。结果路径会被规范化,并且除非解析结果为根目录,否则结尾的斜杠会被去掉。非字符串参数将被忽略。

Another way to think of it is as a sequence of cd commands in a shell.

另一种思考方式就是像在shell里面用一系列的‘cd’命令一样.

path.resolve('foo/bar', '/tmp/file/', '..', 'a/../subfile')

Is similar to:

相当于:

cd foo/bar
cd /tmp/file/
cd ..
cd a/../subfile
pwd

The difference is that the different paths don't need to exist and may also be files.

不同之处在于,这些路径不必真实存在,而且也可以是文件。

Examples:

示例:

path.resolve('wwwroot', 'static_files/png/', '../gif/image.gif')
// 如果当前工作目录为 /home/myself/node,它返回:
'/home/myself/node/wwwroot/static_files/gif/image.gif'

path.isAbsolute(path)#

Determines whether path is an absolute path. An absolute path will always resolve to the same location, regardless of the working directory.

判定path是否为绝对路径。一个绝对路径总是指向一个相同的位置,无论当前工作目录是在哪里。

Posix examples:

Posix 示例:

path.isAbsolute('/foo/bar') // true
path.isAbsolute('/baz/..')  // true
path.isAbsolute('qux/')     // false
path.isAbsolute('.')        // false

Windows examples:

Windows 示例:

path.isAbsolute('//server')  // true
path.isAbsolute('C:/foo/..') // true
path.isAbsolute('bar\\baz')   // false
path.isAbsolute('.')         // false

path.relative(from, to)#

Solve the relative path from from to to.

求解从 from 到 to 的相对路径。

At times we have two absolute paths, and we need to derive the relative path from one to the other. This is actually the reverse transform of path.resolve, which means we see that:

有时我们有两个绝对路径,需要求出从一个到另一个的相对路径。这实际上是 path.resolve 的逆变换,也就是说:

path.resolve(from, path.relative(from, to)) == path.resolve(to)

Examples:

示例:

path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb')
// 返回
'../../impl/bbb'

path.dirname(p)#

Return the directory name of a path. Similar to the Unix dirname command.

返回路径中目录的部分,类似于 Unix 的 dirname 命令。

Example:

示例:

path.dirname('/foo/bar/baz/asdf/quux')
// returns
'/foo/bar/baz/asdf'

path.basename(p, [ext])#

Return the last portion of a path. Similar to the Unix basename command.

返回路径中的最后一部分,类似于 Unix 的 basename 命令。

Example:

示例:

path.basename('/foo/bar/baz/asdf/quux.html', '.html')
// returns
'quux'

path.extname(p)#

Return the extension of the path, from the last '.' to end of string in the last portion of the path. If there is no '.' in the last portion of the path or the first character of it is '.', then it returns an empty string. Examples:

返回路径中文件的扩展名,即路径最后一部分中从最后一个 '.' 到字符串末尾的内容。如果路径的最后一部分中没有 '.',或者它的第一个字符就是 '.',则返回空字符串。例如:

path.extname('index')
// returns
''

path.sep#

The platform-specific file separator. '\\' or '/'.

特定平台的文件分隔符,'\\' 或者 '/'。

An example on *nix:

*nix 上的例子:

'foo/bar/baz'.split(path.sep)
// returns
['foo', 'bar', 'baz']

An example on Windows:

Windows 上的例子:

'foo\\bar\\baz'.split(path.sep)
// returns
['foo', 'bar', 'baz']

path.delimiter#

The platform-specific path delimiter, ';' or ':'.

特定平台的路径分隔符,';' 或者 ':'。

An example on *nix:

*nix 上的例子:

process.env.PATH.split(path.delimiter)
// returns
['/usr/bin', '/bin', '/usr/sbin', '/sbin', '/usr/local/bin']

An example on Windows:

Windows 上的例子:

console.log(process.env.PATH)
// 'C:\Windows\system32;C:\Windows;C:\Program Files\nodejs\'

process.env.PATH.split(path.delimiter)
// returns
['C:\Windows\system32', 'C:\Windows', 'C:\Program Files\nodejs\']


网络#

稳定度: 3 - 稳定

The net module provides you with an asynchronous network wrapper. It contains methods for creating both servers and clients (called streams). You can include this module with require('net');

net 模块封装了异步网络功能,提供了一些方法来创建服务器和客户端(称之为流)。您可以用 require('net') 来引入这个模块。

net.createServer([options], [connectionListener])#

Creates a new TCP server. The connectionListener argument is automatically set as a listener for the 'connection' event.

创建一个新的 TCP 服务器。参数 connectionListener 会被自动作为 'connection' 事件的监听器。

options is an object with the following defaults:

options 是一个包含下列缺省值的对象:

{ allowHalfOpen: false
}

If allowHalfOpen is true, then the socket won't automatically send a FIN packet when the other end of the socket sends a FIN packet. The socket becomes non-readable, but still writable. You should call the end() method explicitly. See 'end' event for more information.

如果允许半开连接 allowHalfOpen 被设置为 true,则当另一端的套接字发送 FIN 报文时套接字并不会自动发送 FIN 报文。套接字会变为不可读,但仍然可写。您应当明确地调用 end() 方法。详见 'end' 事件。

Here is an example of an echo server which listens for connections on port 8124:

下面是一个监听 8124 端口连接的应答服务器的例子:

var net = require('net');
var server = net.createServer(function(c) { // 'connection' 监听器
  console.log('服务器已连接');
  c.on('end', function() {
    console.log('服务器已断开');
  });
  c.write('hello\r\n');
  c.pipe(c);
});
server.listen(8124, function() { // 'listening' 监听器
  console.log('服务器已绑定');
});

Test this by using telnet:

使用 telnet 测试:

telnet localhost 8124

To listen on the socket /tmp/echo.sock the third line from the last would just be changed to

要监听套接字 /tmp/echo.sock 仅需更改倒数第三行代码:

server.listen('/tmp/echo.sock', function() { // 'listening' 监听器

Use nc to connect to a UNIX domain socket server:

使用 nc 连接到一个 UNIX domain 套接字服务器:

nc -U /tmp/echo.sock

net.connect(options, [connectionListener])#

net.createConnection(options, [connectionListener])#

Constructs a new socket object and opens the socket to the given location. When the socket is established, the 'connect' event will be emitted.

构建一个新的套接字对象并打开所给位置的套接字。当套接字就绪时会触发 'connect' 事件。

For TCP sockets, options argument should be an object which specifies:

对于 TCP 套接字,选项 options 参数应为一个指定下列参数的对象:

  • port: Port the client should connect to (Required).

  • port:客户端连接到的端口(必须)

  • host: Host the client should connect to. Defaults to 'localhost'.

  • host:客户端连接到的主机,缺省为 'localhost'

  • localAddress: Local interface to bind to for network connections.

  • localAddress:网络连接绑定的本地接口

  • family : Version of IP stack. Defaults to 4.

  • family:IP 栈版本,缺省为 4

For UNIX domain sockets, options argument should be an object which specifies:

对于 UNIX domain 套接字,选项 options 参数应当为一个指定下列参数的对象:

  • path: Path the client should connect to (Required).

  • path:客户端连接到的路径(必须)

Common options are:

通用选项:

  • allowHalfOpen: if true, the socket won't automatically send a FIN packet when the other end of the socket sends a FIN packet. Defaults to false. See 'end' event for more information.

  • allowHalfOpen:允许半开连接,如果被设置为 true,则当另一端的套接字发送 FIN 报文时套接字并不会自动发送 FIN 报文。缺省为 false。详见 'end' 事件。

The connectListener parameter will be added as an listener for the 'connect' event.

connectListener 参数会被添加为 'connect' 事件的监听器。

Here is an example of a client of echo server as described previously:

下面是一个上述应答服务器的客户端的例子:

var net = require('net');
var client = net.connect({port: 8124},
    function() { //'connect' 监听器
  console.log('client connected');
  client.write('world!\r\n');
});
client.on('data', function(data) {
  console.log(data.toString());
  client.end();
});
client.on('end', function() {
  console.log('客户端断开连接');
});

To connect on the socket /tmp/echo.sock the second line would just be changed to

要连接到套接字 /tmp/echo.sock,仅需将第二行改为

var client = net.connect({path: '/tmp/echo.sock'},

net.connect(port, [host], [connectListener])#

net.createConnection(port, [host], [connectListener])#

Creates a TCP connection to port on host. If host is omitted, 'localhost' will be assumed. The connectListener parameter will be added as an listener for the 'connect' event.

创建一个 host 主机 port 端口的 TCP 连接。如果省略 host 则假定为 'localhost'connectListener 参数会被用作 'connect' 事件的监听器。

net.connect(path, [connectListener])#

net.createConnection(path, [connectListener])#

Creates unix socket connection to path. The connectListener parameter will be added as an listener for the 'connect' event.

创建一个到路径 path 的 UNIX 套接字连接。connectListener 参数会被用作 'connect' 事件的监听器。

类: net.Server#

This class is used to create a TCP or UNIX server. A server is a net.Socket that can listen for new incoming connections.

该类用于创建一个 TCP 或 UNIX 服务器。服务器本质上是一个可监听传入连接的 net.Socket

server.listen(port, [host], [backlog], [callback])#

Begin accepting connections on the specified port and host. If the host is omitted, the server will accept connections directed to any IPv4 address (INADDR_ANY). A port value of zero will assign a random port.

在指定端口 port 和主机 host 上开始接受连接。如果省略 host,则服务器会接受指向任意 IPv4 地址(INADDR_ANY)的连接;端口为 0 则会分配一个随机端口。

Backlog is the maximum length of the queue of pending connections. The actual length will be determined by your OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on linux. The default value of this parameter is 511 (not 512).

积压量 backlog 为连接等待队列的最大长度。实际长度由您的操作系统通过 sysctl 设置决定,比如 Linux 上的 tcp_max_syn_backlogsomaxconn。该参数缺省值为 511(不是 512)。

This function is asynchronous. When the server has been bound, 'listening' event will be emitted. The last parameter callback will be added as an listener for the 'listening' event.

这是一个异步函数。当服务器已被绑定时会触发 'listening' 事件。最后一个参数 callback 会被用作 'listening' 事件的监听器。

One issue some users run into is getting EADDRINUSE errors. This means that another server is already running on the requested port. One way of handling this would be to wait a second and then try again. This can be done with

有些用户会遇到 EADDRINUSE 错误,这表示另一个服务器已经运行在所请求的端口上。处理这种情况的一种方法是等待一秒再重试,可以这样实现:

server.on('error', function (e) {
  if (e.code == 'EADDRINUSE') {
    console.log('地址被占用,重试中...');
    setTimeout(function () {
      server.close();
      server.listen(PORT, HOST);
    }, 1000);
  }
});

(Note: All sockets in Node set SO_REUSEADDR already)

(注意:Node 中的所有套接字已设置了 SO_REUSEADDR

server.listen(path, [callback])#

Start a UNIX socket server listening for connections on the given path.

启动一个 UNIX 套接字服务器在所给路径 path 上监听连接。

This function is asynchronous. When the server has been bound, 'listening' event will be emitted. The last parameter callback will be added as an listener for the 'listening' event.

这是一个异步函数。当服务器已被绑定时会触发 'listening' 事件。最后一个参数 callback 会被用作 'listening' 事件的监听器。

server.listen(handle, [callback])#

  • handle Object
  • callback Function

  • handle处理器 Object

  • callback回调函数 Function

The handle object can be set to either a server or socket (anything with an underlying _handle member), or a {fd: <n>} object.

handle 对象可以被设置为一个服务器或套接字(任何具有底层 _handle 成员的对象),或者一个 {fd: <n>} 对象。

This will cause the server to accept connections on the specified handle, but it is presumed that the file descriptor or handle has already been bound to a port or domain socket.

这将使服务器用指定的句柄接受连接,前提是文件描述符或句柄已经被绑定到某个端口或域套接字上。

Listening on a file descriptor is not supported on Windows.

Windows 不支持监听一个文件描述符。

This function is asynchronous. When the server has been bound, 'listening' event will be emitted. the last parameter callback will be added as an listener for the 'listening' event.

这是一个异步函数。当服务器已被绑定时会触发 'listening' 事件。最后一个参数 callback 会被用作 'listening' 事件的监听器。

server.close([callback])#

Stops the server from accepting new connections and keeps existing connections. This function is asynchronous, the server is finally closed when all connections are ended and the server emits a 'close' event. Optionally, you can pass a callback to listen for the 'close' event.

用于停止服务器接受新连接,但保持已存在的连接。这是一个异步函数, 服务器将在所有的连接都结束后关闭,并且服务器发送 'close'事件 你可以有选择的传入回调函数来监听 'close'事件。

server.address()#

Returns the bound address, the address family name and port of the server as reported by the operating system. Useful to find which port was assigned when giving getting an OS-assigned address. Returns an object with three properties, e.g. { port: 12346, family: 'IPv4', address: '127.0.0.1' }

返回操作系统报告的服务器绑定的地址、地址族名称和端口。在获取操作系统分配的地址时,可用于查询被分配的端口。返回一个包含三个属性的对象,例如 { port: 12346, family: 'IPv4', address: '127.0.0.1' }

Example:

示例:

// 获得随机端口
server.listen(function() {
  address = server.address();
  console.log("opened server on %j", address);
});

Don't call server.address() until the 'listening' event has been emitted.

'listening' 事件发生前请勿调用 server.address()

server.unref()#

Calling unref on a server will allow the program to exit if this is the only active server in the event system. If the server is already unrefd calling unref again will have no effect.

如果这是事件系统中唯一一个活动的服务器,调用 unref 将允许程序退出。如果服务器已被 unref,则再次调用 unref 并不会产生影响。

server.ref()#

Opposite of unref, calling ref on a previously unrefd server will not let the program exit if it's the only server left (the default behavior). If the server is refd calling ref again will have no effect.

unref 相反,如果这是仅剩的服务器,在一个之前被 unref 了的服务器上调用 ref 将不会让程序退出(缺省行为)。如果服务器已经被 ref,则再次调用 ref 并不会产生影响。

server.maxConnections#

Set this property to reject connections when the server's connection count gets high.

设置此属性可以在服务器连接数达到上限时拒绝新连接。

It is not recommended to use this option once a socket has been sent to a child with child_process.fork().

一旦已使用 child_process.fork() 将套接字发送给子进程,就不推荐使用此选项。

server.connections#

This function is deprecated; please use [server.getConnections()][] instead. The number of concurrent connections on the server.

该属性已被废弃,请用 [server.getConnections()][] 代替。表示服务器上的当前并发连接数。

This becomes null when sending a socket to a child with child_process.fork(). To poll forks and get current number of active connections use asynchronous server.getConnections instead.

当用 child_process.fork() 发送一个套接字给子进程后,它将变为 null。要轮询子进程并获取当前活跃的连接数,请改用异步的 server.getConnections。

net.Server is an EventEmitter with the following events:

net.Server 是一个包含下列事件的 EventEmitter :

server.getConnections(callback)#

Asynchronously get the number of concurrent connections on the server. Works when sockets were sent to forks.

异步获取服务器当前的并发连接数。在套接字被发送给子进程后仍然有效。

Callback should take two arguments err and count.

回调函数需要两个参数 errcount.

事件: 'listening'#

Emitted when the server has been bound after calling server.listen.

在服务器调用 server.listen绑定后触发。

事件: 'connection'#

  • Socket object The connection object

  • Socket object 连接对象

Emitted when a new connection is made. socket is an instance of net.Socket.

在一个新连接被创建时触发。 socket 是一个net.Socket的实例。

事件: 'close'#

Emitted when the server closes. Note that if connections exist, this event is not emitted until all connections are ended.

当服务器关闭时触发。注意:如果当前仍有活动连接,该事件将等到所有连接都结束后才触发。

事件: 'error'#

  • Error Object

  • Error Object

Emitted when an error occurs. The 'close' event will be called directly following this event. See example in discussion of server.listen.

当一个错误发生时触发。'close' 事件会紧接着该事件被触发。请参阅 server.listen 讨论中的例子。

类: net.Socket#

This object is an abstraction of a TCP or UNIX socket. net.Socket instances implement a duplex Stream interface. They can be created by the user and used as a client (with connect()) or they can be created by Node and passed to the user through the 'connection' event of a server.

这个对象是 TCP 或 UNIX 套接字的抽象。net.Socket 实例实现了一个双工流(duplex Stream)接口。它们可以由用户创建并用作客户端(使用 connect()),也可以由 Node 创建并通过服务器的 'connection' 事件传递给用户。

new net.Socket([options])#

Construct a new socket object.

构造一个新的套接字对象。

options is an object with the following defaults:

options 是一个包含下列缺省值的对象:

{ fd: null
  type: null
  allowHalfOpen: false
}

fd allows you to specify the existing file descriptor of socket. type specified underlying protocol. It can be 'tcp4', 'tcp6', or 'unix'. About allowHalfOpen, refer to createServer() and 'end' event.

fd 允许你指定一个已存在的套接字文件描述符。type 指定底层协议,可以是 'tcp4'、'tcp6' 或 'unix'。关于 allowHalfOpen,参见 createServer() 和 'end' 事件。

socket.connect(port, [host], [connectListener])#

socket.connect(path, [connectListener])#

Opens the connection for a given socket. If port and host are given, then the socket will be opened as a TCP socket, if host is omitted, localhost will be assumed. If a path is given, the socket will be opened as a unix socket to that path.

打开给定套接字的连接。如果传入了 port 和 host,则套接字将作为 TCP 套接字打开;如果省略 host,则默认为 localhost。如果传入了 path,则套接字将作为 UNIX 套接字打开并连接到该路径。

Normally this method is not needed, as net.createConnection opens the socket. Use this only if you are implementing a custom Socket.

一般情况下不需要使用此方法,因为 net.createConnection 会打开套接字。只有在您实现了自定义套接字时才需要用到。

This function is asynchronous. When the 'connect' event is emitted the socket is established. If there is a problem connecting, the 'connect' event will not be emitted, the 'error' event will be emitted with the exception.

这是一个异步函数。当 'connect' 事件被触发时,套接字连接已经建立。如果连接出现问题,则不会触发 'connect' 事件,而是触发 'error' 事件并传递异常。

The connectListener parameter will be added as an listener for the 'connect' event.

connectListener 参数会被添加为 'connect' 事件的监听器。

socket.bufferSize#

net.Socket has the property that socket.write() always works. This is to help users get up and running quickly. The computer cannot always keep up with the amount of data that is written to a socket - the network connection simply might be too slow. Node will internally queue up the data written to a socket and send it out over the wire when it is possible. (Internally it is polling on the socket's file descriptor for being writable).

net.Socket 有一个特性:socket.write() 总是可用。这是为了帮助用户快速上手。计算机并不总是跟得上写入套接字的数据量,网络连接可能太慢。Node 会在内部将写入套接字的数据排队,并在可能时通过网络发送出去。(其内部实现是轮询套接字的文件描述符,等待它变为可写。)

The consequence of this internal buffering is that memory may grow. This property shows the number of characters currently buffered to be written. (Number of characters is approximately equal to the number of bytes to be written, but the buffer may contain strings, and the strings are lazily encoded, so the exact number of bytes is not known.)

内部缓冲的可能后果是内存使用会增加。这个属性表示了现在处于缓冲区等待被写入的字符数。(字符的数目约等于要被写入的字节数,但是缓冲区可能包含字符串,而字符串是惰性编码的,所以确切的字节数是未知的。)

Users who experience large or growing bufferSize should attempt to "throttle" the data flows in their program with pause() and resume().

遇到数值很大或者增长很快的bufferSize的时候,用户应该尝试用pause()resume()来控制数据流。

socket.setEncoding([encoding])#

Set the encoding for the socket as a Readable Stream. See stream.setEncoding() for more information.

将套接字作为可读流(Readable Stream)设置其编码。详见 stream.setEncoding()。

socket.write(data, [encoding], [callback])#

Sends data on the socket. The second parameter specifies the encoding in the case of a string--it defaults to UTF8 encoding.

在套接字上发送数据。第二个参数指明了使用字符串时的编码方式,默认为 UTF8 编码。

Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is again free.

如果所有数据都被成功刷新到内核缓冲区,则返回 true;如果所有或部分数据还在用户内存中排队,则返回 false。当缓冲区再次空闲时会触发 'drain' 事件。

The optional callback parameter will be executed when the data is finally written out - this may not be immediately.

当数据最终被完整写入时,可选的callback参数会被执行 - 但不一定是马上执行。

socket.end([data], [encoding])#

Half-closes the socket. i.e., it sends a FIN packet. It is possible the server will still send some data.

半关闭套接字,即发送一个 FIN 包。服务器可能仍会发送一些数据。

If data is specified, it is equivalent to calling socket.write(data, encoding) followed by socket.end().

如果传入了 data,等同于先调用 socket.write(data, encoding) 再调用 socket.end()。

socket.destroy()#

Ensures that no more I/O activity happens on this socket. Only necessary in case of errors (parse error or so).

确保该套接字上不再发生任何 I/O 活动。仅在出错时(如解析错误)才有必要调用。

socket.pause()#

Pauses the reading of data. That is, 'data' events will not be emitted. Useful to throttle back an upload.

暂停读取数据。 'data' 事件不会被触发。 对于控制上传非常有用。

socket.resume()#

Resumes reading after a call to pause().

在调用 pause()后恢复读操作。

socket.setTimeout(timeout, [callback])#

Sets the socket to timeout after timeout milliseconds of inactivity on the socket. By default net.Socket do not have a timeout.

如果套接字超过timeout毫秒处于闲置状态,则将套接字设为超时。默认情况下net.Socket不存在超时。

When an idle timeout is triggered the socket will receive a 'timeout' event but the connection will not be severed. The user must manually end() or destroy() the socket.

当一个闲置超时被触发时,套接字会接收到一个'timeout'事件,但是连接将不会被断开。用户必须手动end()destroy()这个套接字。

If timeout is 0, then the existing idle timeout is disabled.

如果timeout为0,那么现有的闲置超时会被禁用。

The optional callback parameter will be added as a one time listener for the 'timeout' event.

可选的callback参数将会被添加成为'timeout'事件的一次性监听器。

socket.setNoDelay([noDelay])#

Disables the Nagle algorithm. By default TCP connections use the Nagle algorithm, they buffer data before sending it off. Setting true for noDelay will immediately fire off data each time socket.write() is called. noDelay defaults to true.

禁用纳格(Nagle)算法。默认情况下TCP连接使用纳格算法,这些连接在发送数据之前对数据进行缓冲处理。 将noDelay设成true会在每次socket.write()被调用时立刻发送数据。noDelay默认为true

socket.setKeepAlive([enable], [initialDelay])#

Enable/disable keep-alive functionality, and optionally set the initial delay before the first keepalive probe is sent on an idle socket. enable defaults to false.

启用/禁用长连接(keep-alive)功能,并可选地设定在闲置套接字上发送第一个 keep-alive 探测包(probe)之前的初始延时。enable 默认为 false。

Set initialDelay (in milliseconds) to set the delay between the last data packet received and the first keepalive probe. Setting 0 for initialDelay will leave the value unchanged from the default (or previous) setting. Defaults to 0.

设定 initialDelay(毫秒),即收到的最后一个数据包与第一个 keep-alive 探测包之间的延时。将 initialDelay 设成 0 会让该值保持缺省(或之前所设的)设置不变。默认为 0。

socket.address()#

Returns the bound address, the address family name and port of the socket as reported by the operating system. Returns an object with three properties, e.g. { port: 12346, family: 'IPv4', address: '127.0.0.1' }

返回 socket 绑定的IP地址, 协议类型 (family name) 以及 端口号 (port). 具体是一个包含三个属性的对象, 形如 { port: 12346, family: 'IPv4', address: '127.0.0.1' }

socket.unref()#

Calling unref on a socket will allow the program to exit if this is the only active socket in the event system. If the socket is already unrefd calling unref again will have no effect.

如果这是事件系统中唯一一个活动的套接字,调用 unref 将允许程序退出。如果套接字已被 unref,则再次调用 unref 并不会产生影响。

socket.ref()#

Opposite of unref, calling ref on a previously unrefd socket will not let the program exit if it's the only socket left (the default behavior). If the socket is refd calling ref again will have no effect.

unref 相反,如果这是仅剩的套接字,在一个之前被 unref 了的套接字上调用 ref不会让程序退出(缺省行为)。如果一个套接字已经被 ref,则再次调用 ref 并不会产生影响。

socket.remoteAddress#

The string representation of the remote IP address. For example, '74.125.127.100' or '2001:4860:a005::68'.

远程IP地址的字符串表示。例如,'74.125.127.100''2001:4860:a005::68'

socket.remotePort#

The numeric representation of the remote port. For example, 80 or 21.

远程端口的数值表示。例如,8021

socket.localAddress#

The string representation of the local IP address the remote client is connecting on. For example, if you are listening on '0.0.0.0' and the client connects on '192.168.1.1', the value would be '192.168.1.1'.

远程客户端正在连接的本地IP地址的字符串表示。例如,如果你在监听'0.0.0.0'而客户端连接在'192.168.1.1',这个值就会是 '192.168.1.1'

socket.localPort#

The numeric representation of the local port. For example, 80 or 21.

本地端口的数值表示。比如8021

socket.bytesRead#

The amount of received bytes.

所接收的字节数。

socket.bytesWritten#

The amount of bytes sent.

所发送的字节数。

net.Socket instances are EventEmitter with the following events:

net.Socket实例是带有以下事件的EventEmitter对象:

事件: 'lookup'#

Emitted after resolving the hostname but before connecting. Not applicable to UNIX sockets.

这个事件在解析主机名之后,连接主机之前被分发。对UNIX套接字不适用。

  • err {Error | Null} The error object. See [dns.lookup()][].
  • address {String} The IP address.
  • family {String | Null} The address type. See [dns.lookup()][].

  • err {Error | Null} 错误对象。见[dns.lookup()][]。

  • address {String} IP地址。
  • family {String | Null} 地址类型。见[dns.lookup()][]。

事件: 'connect'#

Emitted when a socket connection is successfully established. See connect().

该事件在一个套接字连接成功建立后被分发。见connect()

事件: 'data'#

  • Buffer object

  • Buffer object

Emitted when data is received. The argument data will be a Buffer or String. Encoding of data is set by socket.setEncoding(). (See the Readable Stream section for more information.)

当收到数据时被分发。data 参数会是一个 Buffer 或 String 对象。数据的编码方式由 socket.setEncoding() 设定。(详见可读流(Readable Stream)章节)

Note that the data will be lost if there is no listener when a Socket emits a 'data' event.

请注意,如果一个Socket对象分发一个'data'事件时没有任何监听器存在,则 数据会丢失

事件: 'end'#

Emitted when the other end of the socket sends a FIN packet.

当套接字的另一端发送FIN包时,该事件被分发。

By default (allowHalfOpen == false) the socket will destroy its file descriptor once it has written out its pending write queue. However, by setting allowHalfOpen == true the socket will not automatically end() its side allowing the user to write arbitrary amounts of data, with the caveat that the user is required to end() their side now.

默认情况下 (allowHalfOpen == false),当套接字完成待写入队列中的任务时,它会destroy文件描述符。然而,如果把allowHalfOpen设成true,那么套接字将不会从它这边自动调用end(),使得用户可以随意写入数据,但同时使得用户自己需要调用end()

事件: 'timeout'#

Emitted if the socket times out from inactivity. This is only to notify that the socket has been idle. The user must manually close the connection.

当套接字因为非活动状态而超时时该事件被分发。这只是用来表明套接字处于空闲状态。用户必须手动关闭这个连接。

See also: socket.setTimeout()

参阅:socket.setTimeout()

事件: 'drain'#

Emitted when the write buffer becomes empty. Can be used to throttle uploads.

当写入缓冲被清空时产生。可被用于控制上传流量。

See also: the return values of socket.write()

参阅:socket.write() 的返回值

事件: 'error'#

  • Error object

  • Error object

Emitted when an error occurs. The 'close' event will be called directly following this event.

当一个错误发生时产生。'close' 事件会紧接着该事件被触发。

事件: 'close'#

  • had_error Boolean true if the socket had a transmission error

  • had_error Boolean 如果套接字发生了传输错误则此字段为true

Emitted once the socket is fully closed. The argument had_error is a boolean which says if the socket was closed due to a transmission error.

当套接字完全关闭时该事件被分发。参数had_error是一个布尔值,表示了套接字是否因为一个传输错误而被关闭。

net.isIP(input)#

Tests if input is an IP address. Returns 0 for invalid strings, returns 4 for IP version 4 addresses, and returns 6 for IP version 6 addresses.

测试 input 是否为 IP 地址。无效字符串返回 0;IPv4 地址返回 4;IPv6 地址返回 6。

net.isIPv4(input)#

Returns true if input is a version 4 IP address, otherwise returns false.

如果 input 为版本 4 地址则返回 true,否则返回 false。

net.isIPv6(input)#

Returns true if input is a version 6 IP address, otherwise returns false.

如果 input 为版本 6 地址则返回 true,否则返回 false。

UDP / 数据报套接字#

稳定度: 3 - 稳定

Datagram sockets are available through require('dgram').

数据报套接字通过 require('dgram') 提供。

Important note: the behavior of dgram.Socket#bind() has changed in v0.10 and is always asynchronous now. If you have code that looks like this:

重要提醒:dgram.Socket#bind() 的行为在 v0.10 中已改变,并且现在它总是异步的。如果您的代码看起来像这样:

var s = dgram.createSocket('udp4');
s.bind(1234);
s.addMembership('224.0.0.114');

You have to change it to this:

您需要将它改成这样:

var s = dgram.createSocket('udp4');
s.bind(1234, function() {
  s.addMembership('224.0.0.114');
});

dgram.createSocket(type, [callback])#

  • type String. Either 'udp4' or 'udp6'
  • callback Function. Attached as a listener to message events. Optional
  • Returns: Socket object

  • type String 可以是 'udp4' 或 'udp6'

  • callback Function 可选,会被作为 message 事件的监听器。
  • 返回:Socket 对象

Creates a datagram Socket of the specified types. Valid types are udp4 and udp6.

创建一个指定类型的数据报 Socket。有效类型包括 udp4udp6

Takes an optional callback which is added as a listener for message events.

接受一个可选的回调,会被添加为 message 事件的监听器。

Call socket.bind if you want to receive datagrams. socket.bind() will bind to the "all interfaces" address on a random port (it does the right thing for both udp4 and udp6 sockets). You can then retrieve the address and port with socket.address().address and socket.address().port.

如果您想接收数据报则可调用 socket.bindsocket.bind() 会绑定到“所有网络接口”地址的一个随机端口(udp4udp6 皆是如此)。然后您可以通过 socket.address().addresssocket.address().port 来取得地址和端口。

类: dgram.Socket#

The dgram Socket class encapsulates the datagram functionality. It should be created via dgram.createSocket(type, [callback]).

dgram Socket 类封装了数据报功能,可以通过 dgram.createSocket(type, [callback]) 创建。

事件: 'message'#

  • msg Buffer object. The message
  • rinfo Object. Remote address information

  • msg Buffer 对象,消息

  • rinfo Object,远程地址信息

Emitted when a new datagram is available on a socket. msg is a Buffer and rinfo is an object with the sender's address information:

当套接字中有新的数据报时发生。msg 是一个 Bufferrinfo 是一个包含了发送者地址信息的对象:

socket.on('message', function(msg, rinfo) {
  console.log('收到 %d 字节,来自 %s:%d\n',
              msg.length, rinfo.address, rinfo.port);
});

事件: 'listening'#

Emitted when a socket starts listening for datagrams. This happens as soon as UDP sockets are created.

当一个套接字开始监听数据报时产生。它会在 UDP 套接字被创建时发生。

事件: 'close'#

Emitted when a socket is closed with close(). No new message events will be emitted on this socket.

当一个套接字被 close() 关闭时产生。之后这个套接字上不会再有 message 事件发生。

事件: 'error'#

  • exception Error object

  • exception Error 对象

Emitted when an error occurs.

当发生错误时产生。

socket.send(buf, offset, length, port, address, [callback])#

  • buf Buffer object. Message to be sent
  • offset Integer. Offset in the buffer where the message starts.
  • length Integer. Number of bytes in the message.
  • port Integer. destination port
  • address String. destination IP
  • callback Function. Callback when message is done being delivered. Optional.

  • buf Buffer 对象,要发送的消息

  • offset Integer,Buffer 中消息起始偏移值。
  • length Integer,消息的字节数。
  • port Integer,目标端口
  • address String,目标 IP
  • callback Function,可选,当消息被投递后的回调。

For UDP sockets, the destination port and IP address must be specified. A string may be supplied for the address parameter, and it will be resolved with DNS. An optional callback may be specified to detect any DNS errors and when buf may be re-used. Note that DNS lookups will delay the time that a send takes place, at least until the next tick. The only way to know for sure that a send has taken place is to use the callback.

对于 UDP 套接字,必须指定目标端口和 IP 地址。address 参数可以是一个字符串,它会通过 DNS 解析。可以指定一个可选的回调,用于发现 DNS 错误以及得知 buf 何时可被重用。请注意,DNS 查询会将实际发送至少推迟到下一个 tick。确认发送已经发生的唯一方法是使用回调。

If the socket has not been previously bound with a call to bind, it's assigned a random port number and bound to the "all interfaces" address (0.0.0.0 for udp4 sockets, ::0 for udp6 sockets).

如果套接字之前并未被调用 bind 绑定,则它会被分配一个随机端口并绑定到“所有网络接口”地址(udp4 套接字是 0.0.0.0;udp6 套接字是 ::0)。

Example of sending a UDP packet to a random port on localhost;

localhost 随机端口发送 UDP 报文的例子:

var dgram = require('dgram');
var message = new Buffer("Some bytes");
var client = dgram.createSocket("udp4");
client.send(message, 0, message.length, 41234, "localhost", function(err) {
  client.close();
});

A Note about UDP datagram size

关于 UDP 数据报大小的注意事项

The maximum size of an IPv4/v6 datagram depends on the MTU (Maximum Transmission Unit) and on the Payload Length field size.

一个 IPv4/v6 数据报的最大大小取决于 MTU(最大传输单元)和 Payload Length 字段大小。

  • The Payload Length field is 16 bits wide, which means that a normal payload cannot be larger than 64K octets including internet header and data (65,507 bytes = 65,535 − 8 bytes UDP header − 20 bytes IP header); this is generally true for loopback interfaces, but such long datagrams are impractical for most hosts and networks.

  • Payload Length 字段宽 16 位,意味着包括网络头和数据在内的正常负载不能超过 64K 字节(65,507 字节 = 65,535 − 8 字节 UDP 头 − 20 字节 IP 头);这对环回接口通常成立,但如此长的数据报对大多数主机和网络来说并不现实。

  • The MTU is the largest size a given link layer technology can support for datagrams. For any link, IPv4 mandates a minimum MTU of 68 octets, while the recommended MTU for IPv4 is 576 (typically recommended as the MTU for dial-up type applications), whether they arrive whole or in fragments.

  • MTU 是给定的数据链路层技术能为数据报提供支持的最大大小。对于任何链路,IPv4 规定最小 MTU 为 68 字节,而 IPv4 推荐的 MTU 为 576(通常作为拨号类应用的推荐 MTU),无论数据报是完整到达还是分片到达。

    For IPv6, the minimum MTU is 1280 octets, however, the mandatory minimum fragment reassembly buffer size is 1500 octets. The value of 68 octets is very small, since most current link layer technologies have a minimum MTU of 1500 (like Ethernet).

    对于 IPv6,最小 MTU 为 1280 字节,但强制的最小分片重组缓冲大小为 1500 字节。68 字节是非常小的值,因为当前大多数数据链路层技术(比如以太网)都具有 1500 的最小 MTU。

Note that it's impossible to know in advance the MTU of each link through which a packet might travel, and that generally sending a datagram greater than the (receiver) MTU won't work (the packet gets silently dropped, without informing the source that the data did not reach its intended recipient).

请注意,我们无法提前得知报文可能经过的每一条链路的 MTU,而发送大于(接收方)MTU 的数据报通常不会成功(报文会被悄悄丢弃,而不会通知来源数据未到达预期的接收者)。

socket.bind(port, [address], [callback])#

  • port Integer
  • address String, Optional
  • callback Function with no parameters, Optional. Callback when binding is done.

  • port Integer

  • address String,可选
  • callback 没有参数的 Function,可选,当绑定完成时被调用。

For UDP sockets, listen for datagrams on a named port and optional address. If address is not specified, the OS will try to listen on all addresses. After binding is done, a "listening" event is emitted and the callback(if specified) is called. Specifying both a "listening" event listener and callback is not harmful but not very useful.

对于 UDP 套接字,在一个具名端口 port 和可选的地址 address 上监听数据报。如果 address 未指定,则操作系统会尝试监听所有地址。当绑定完成后,一个 "listening" 事件会发生,并且回调 callback(如果指定)会被调用。同时指定 "listening" 事件监听器和 callback 并不会产生副作用,但也没什么用。

A bound datagram socket keeps the node process running to receive datagrams.

一个绑定了的数据报套接字会保持 node 进程运行来接收数据报。

If binding fails, an "error" event is generated. In rare case (e.g. binding a closed socket), an Error may be thrown by this method.

如果绑定失败,则一个 "error" 事件会被产生。在极少情况下(比如绑定一个已关闭的套接字),该方法会抛出一个 Error

Example of a UDP server listening on port 41234:

一个监听端口 41234 的 UDP 服务器的例子:

var dgram = require('dgram');
var server = dgram.createSocket('udp4');
server.bind(41234);
// 服务器正在监听 0.0.0.0:41234

socket.close()#

Close the underlying socket and stop listening for data on it.

关闭底层套接字并停止监听数据。

socket.address()#

Returns an object containing the address information for a socket. For UDP sockets, this object will contain address , family and port.

返回一个包含了套接字地址信息的对象。对于 UDP 套接字,该对象会包含地址 address、地址族 family 和端口号 port

socket.setBroadcast(flag)#

  • flag Boolean

  • flag Boolean

Sets or clears the SO_BROADCAST socket option. When this option is set, UDP packets may be sent to a local interface's broadcast address.

设置或清除 SO_BROADCAST 套接字选项。当该选项被设置,则 UDP 报文可能被发送到一个本地接口的广播地址。

socket.setTTL(ttl)#

  • ttl Integer

  • ttl Integer

Sets the IP_TTL socket option. TTL stands for "Time to Live," but in this context it specifies the number of IP hops that a packet is allowed to go through. Each router or gateway that forwards a packet decrements the TTL. If the TTL is decremented to 0 by a router, it will not be forwarded. Changing TTL values is typically done for network probes or when multicasting.

设置 IP_TTL 套接字选项。TTL 表示“Time to Live”(生存时间),但在此上下文中它指的是报文允许通过的 IP 跃点数。各个转发报文的路由器或网关都会递减 TTL。如果 TTL 被一个路由器递减到 0,则它将不会被转发。改变 TTL 值通常被用于网络探测器或多播。

The argument to setTTL() is a number of hops between 1 and 255. The default on most systems is 64.

setTTL() 的参数为介于 1 至 255 的跃点数。在大多数系统上缺省值为 64。

socket.setMulticastTTL(ttl)#

  • ttl Integer

  • ttl Integer

Sets the IP_MULTICAST_TTL socket option. TTL stands for "Time to Live," but in this context it specifies the number of IP hops that a packet is allowed to go through, specifically for multicast traffic. Each router or gateway that forwards a packet decrements the TTL. If the TTL is decremented to 0 by a router, it will not be forwarded.

设置 IP_MULTICAST_TTL 套接字选项。TTL 表示“Time to Live”(生存时间),但在此上下文中它指的是报文允许通过的 IP 跃点数,特别是组播流量。各个转发报文的路由器或网关都会递减 TTL。如果 TTL 被一个路由器递减到 0,则它将不会被转发。

The argument to setMulticastTTL() is a number of hops between 0 and 255. The default on most systems is 1.

setMulticastTTL() 的参数为介于 0 至 255 的跃点数。在大多数系统上缺省值为 1。

socket.setMulticastLoopback(flag)#

  • flag Boolean

  • flag Boolean

Sets or clears the IP_MULTICAST_LOOP socket option. When this option is set, multicast packets will also be received on the local interface.

设置或清除 IP_MULTICAST_LOOP 套接字选项。当该选项被设置时,组播报文也会被本地接口收到。

socket.addMembership(multicastAddress, [multicastInterface])#

  • multicastAddress String
  • multicastInterface String, Optional

  • multicastAddress String

  • multicastInterface String,可选

Tells the kernel to join a multicast group with IP_ADD_MEMBERSHIP socket option.

IP_ADD_MEMBERSHIP 套接字选项告诉内核加入一个组播分组。

If multicastInterface is not specified, the OS will try to add membership to all valid interfaces.

如果未指定 multicastInterface,则操作系统会尝试在所有有效接口上加入组播成员关系。

socket.dropMembership(multicastAddress, [multicastInterface])#

  • multicastAddress String
  • multicastInterface String, Optional

  • multicastAddress String

  • multicastInterface String,可选

Opposite of addMembership - tells the kernel to leave a multicast group with IP_DROP_MEMBERSHIP socket option. This is automatically called by the kernel when the socket is closed or process terminates, so most apps will never need to call this.

addMembership 相反,以 IP_DROP_MEMBERSHIP 套接字选项告诉内核退出一个组播分组。当套接字被关闭或进程结束时内核会自动调用,因此大多数应用都没必要调用它。

If multicastInterface is not specified, the OS will try to drop membership to all valid interfaces.

如果未指定 multicastInterface,则操作系统会尝试在所有有效接口上退出组播成员关系。

socket.unref()#

Calling unref on a socket will allow the program to exit if this is the only active socket in the event system. If the socket is already unrefd calling unref again will have no effect.

如果这是事件系统中唯一一个活动的套接字,调用 unref 将允许程序退出。如果套接字已被 unref,则再次调用 unref 并不会产生影响。

socket.ref()#

Opposite of unref, calling ref on a previously unrefd socket will not let the program exit if it's the only socket left (the default behavior). If the socket is refd calling ref again will have no effect.

unref 相反,如果这是仅剩的套接字,在一个之前被 unref 了的套接字上调用 ref不会让程序退出(缺省行为)。如果一个套接字已经被 ref,则再次调用 ref 并不会产生影响。

DNS#

稳定度: 3 - 稳定

Use require('dns') to access this module. All methods in the dns module use C-Ares except for dns.lookup which uses getaddrinfo(3) in a thread pool. C-Ares is much faster than getaddrinfo but the system resolver is more constant with how other programs operate. When a user does net.connect(80, 'google.com') or http.get({ host: 'google.com' }) the dns.lookup method is used. Users who need to do a large number of lookups quickly should use the methods that go through C-Ares.

使用 require('dns') 引入此模块。dns 模块中除 dns.lookup 以外的所有方法都使用 C-Ares 实现,dns.lookup 则在线程池中使用 getaddrinfo(3)。C-Ares 比 getaddrinfo 快得多,但系统解析器与其它程序的行为更一致。当用户调用 net.connect(80, 'google.com') 或 http.get({ host: 'google.com' }) 时,使用的是 dns.lookup 方法。需要快速进行大量查询的用户应当使用基于 C-Ares 的方法。

Here is an example which resolves 'www.google.com' then reverse resolves the IP addresses which are returned.

下面是一个解析 'www.google.com' 并反向解析所返回 IP 地址的例子。

var dns = require('dns');

dns.resolve4('www.google.com', function (err, addresses) {
  if (err) throw err;
  console.log('地址: ' + JSON.stringify(addresses));
  addresses.forEach(function (a) {
    dns.reverse(a, function (err, domains) {
      if (err) throw err;
      console.log('反向解析 ' + a + ': ' + JSON.stringify(domains));
    });
  });
});

dns.lookup(domain, [family], callback)#

Resolves a domain (e.g. 'google.com') into the first found A (IPv4) or AAAA (IPv6) record. The family can be the integer 4 or 6. Defaults to null that indicates both Ip v4 and v6 address family.

将一个域名(比如 'google.com')解析为第一个找到的 A 记录(IPv4)或 AAAA 记录(IPv6)。地址族 family 可以是数字 46,缺省为 null 表示同时允许 IPv4 和 IPv6 地址族。

The callback has arguments (err, address, family). The address argument is a string representation of a IP v4 or v6 address. The family argument is either the integer 4 or 6 and denotes the family of address (not necessarily the value initially passed to lookup).

回调参数为 (err, address, family)。地址 address 参数为一个代表 IPv4 或 IPv6 地址的字符串。地址族 family 参数为数字 4 或 6,代表 address 的地址族(不一定是之前传入 lookup 的值)。

On error, err is an Error object, where err.code is the error code. Keep in mind that err.code will be set to 'ENOENT' not only when the domain does not exist but also when the lookup fails in other ways such as no available file descriptors.

当错误发生时,err 为一个 Error 对象,其中 err.code 为错误代码。请记住 err.code 被设定为 'ENOENT' 的情况不仅是域名不存在,也可能是查询在其它途径出错,比如没有可用文件描述符时。

dns.resolve(domain, [rrtype], callback)#

Resolves a domain (e.g. 'google.com') into an array of the record types specified by rrtype. Valid rrtypes are 'A' (IPV4 addresses, default), 'AAAA' (IPV6 addresses), 'MX' (mail exchange records), 'TXT' (text records), 'SRV' (SRV records), 'PTR' (used for reverse IP lookups), 'NS' (name server records) and 'CNAME' (canonical name records).

将一个域名(比如 'google.com')解析为一个 rrtype 指定记录类型的数组。有效 rrtypes 取值有 'A'(IPv4 地址,缺省)、'AAAA'(IPv6 地址)、'MX'(邮件交换记录)、'TXT'(文本记录)、'SRV'(SRV 记录)、'PTR'(用于 IP 反向查找)、'NS'(域名服务器记录)和 'CNAME'(别名记录)。

The callback has arguments (err, addresses). The type of each item in addresses is determined by the record type, and described in the documentation for the corresponding lookup methods below.

回调参数为 (err, addresses)。其中 addresses 中每一项的类型取决于记录类型,详见下文对应的查找方法。

On error, err is an Error object, where err.code is one of the error codes listed below.

当出错时,err 参数为一个 Error 对象,其中 err.code 为下文所列出的错误代码之一。

dns.resolve4(domain, callback)#

The same as dns.resolve(), but only for IPv4 queries (A records). addresses is an array of IPv4 addresses (e.g. ['74.125.79.104', '74.125.79.105', '74.125.79.106']).

dns.resolve() 一样,但只用于查询 IPv4(A 记录)。addresses 是一个 IPv4 地址的数组(比如 ['74.125.79.104', '74.125.79.105', '74.125.79.106'])。

dns.resolve6(domain, callback)#

The same as dns.resolve4() except for IPv6 queries (an AAAA query).

类似于 dns.resolve4(),但用于 IPv6(AAAA)查询。

dns.resolveMx(domain, callback)#

The same as dns.resolve(), but only for mail exchange queries (MX records).

类似于 dns.resolve(),但用于邮件交换查询(MX 记录)。

addresses is an array of MX records, each with a priority and an exchange attribute (e.g. [{'priority': 10, 'exchange': 'mx.example.com'},...]).

addresses 为一个 MX 记录的数组,每一项包含优先级和交换属性(比如 [{'priority': 10, 'exchange': 'mx.example.com'},...])。

dns.resolveTxt(domain, callback)#

The same as dns.resolve(), but only for text queries (TXT records). addresses is an array of the text records available for domain (e.g., ['v=spf1 ip4:0.0.0.0 ~all']).

dns.resolve() 相似,但用于文本查询(TXT 记录)。addressesdomain 可用文本记录的数组(比如 ['v=spf1 ip4:0.0.0.0 ~all'])。

dns.resolveSrv(domain, callback)#

The same as dns.resolve(), but only for service records (SRV records). addresses is an array of the SRV records available for domain. Properties of SRV records are priority, weight, port, and name (e.g., [{'priority': 10, 'weight': 5, 'port': 21223, 'name': 'service.example.com'}, ...]).

查询 SRV 记录,与 dns.resolve() 相似。addresses 是域名 domain 可用的 SRV 记录数组,每条记录包含优先级(priority)、权重(weight)、端口号(port)和名称(name)属性(比如:[{'priority': 10, 'weight': 5, 'port': 21223, 'name': 'service.example.com'}, ...])。

dns.resolveNs(domain, callback)#

The same as dns.resolve(), but only for name server records (NS records). addresses is an array of the name server records available for domain (e.g., ['ns1.example.com', 'ns2.example.com']).

查询 NS 记录,与 dns.resolve() 相似。addresses 是域名 domain 可用的 NS 记录数组(比如:['ns1.example.com', 'ns2.example.com'])。

dns.resolveCname(domain, callback)#

The same as dns.resolve(), but only for canonical name records (CNAME records). addresses is an array of the canonical name records available for domain (e.g., ['bar.example.com']).

查询 CNAME 记录,与 dns.resolve() 相似。addresses 是域名 domain 可用的 CNAME 记录数组(比如:['bar.example.com'])。

dns.reverse(ip, callback)#

Reverse resolves an ip address to an array of domain names.

反向解析 IP 地址,返回指向该 IP 地址的域名数组。

The callback has arguments (err, domains).

回调函数的参数为 (err, domains)。

On error, err is an Error object, where err.code is one of the error codes listed below.

当出错时,err 参数为一个 Error 对象,其中 err.code 为下文所列出的错误代码之一。

dns.getServers()#

Returns an array of IP addresses as strings that are currently being used for resolution

以字符串形式返回当前用于域名解析的 IP 地址数组。

dns.setServers(servers)#

Given an array of IP addresses as strings, set them as the servers to use for resolving

指定一个 IP 地址字符串数组,将它们作为解析所用的服务器。

If you specify a port with the address it will be stripped, as the underlying library doesn't support that.

如果您在地址中指定了端口,则端口会被忽略,因为底层库并不支持。

This will throw if you pass invalid input.

如果您传入无效参数,则会抛出异常。

错误代码#

Each DNS query can return one of the following error codes:

每个 DNS 查询都可能返回下列错误代码之一:

  • dns.NODATA: DNS server returned answer with no data.
  • dns.FORMERR: DNS server claims query was misformatted.
  • dns.SERVFAIL: DNS server returned general failure.
  • dns.NOTFOUND: Domain name not found.
  • dns.NOTIMP: DNS server does not implement requested operation.
  • dns.REFUSED: DNS server refused query.
  • dns.BADQUERY: Misformatted DNS query.
  • dns.BADNAME: Misformatted domain name.
  • dns.BADFAMILY: Unsupported address family.
  • dns.BADRESP: Misformatted DNS reply.
  • dns.CONNREFUSED: Could not contact DNS servers.
  • dns.TIMEOUT: Timeout while contacting DNS servers.
  • dns.EOF: End of file.
  • dns.FILE: Error reading file.
  • dns.NOMEM: Out of memory.
  • dns.DESTRUCTION: Channel is being destroyed.
  • dns.BADSTR: Misformatted string.
  • dns.BADFLAGS: Illegal flags specified.
  • dns.NONAME: Given hostname is not numeric.
  • dns.BADHINTS: Illegal hints flags specified.
  • dns.NOTINITIALIZED: c-ares library initialization not yet performed.
  • dns.LOADIPHLPAPI: Error loading iphlpapi.dll.
  • dns.ADDRGETNETWORKPARAMS: Could not find GetNetworkParams function.
  • dns.CANCELLED: DNS query cancelled.

  • dns.NODATA: DNS 服务器返回无数据应答。

  • dns.FORMERR: DNS 声称查询格式错误。
  • dns.SERVFAIL: DNS 服务器返回一般失败。
  • dns.NOTFOUND: 域名未找到。
  • dns.NOTIMP: DNS 服务器未实现所请求操作。
  • dns.REFUSED: DNS 服务器拒绝查询。
  • dns.BADQUERY: DNS 查询格式错误。
  • dns.BADNAME: 域名格式错误。
  • dns.BADFAMILY: 不支持的地址类型。
  • dns.BADRESP: DNS 答复格式错误。
  • dns.CONNREFUSED: 无法联系 DNS 服务器。
  • dns.TIMEOUT: 联系 DNS 服务器超时。
  • dns.EOF: 文件末端。
  • dns.FILE: 读取文件错误。
  • dns.NOMEM: 超出内存。
  • dns.DESTRUCTION: 通道正在被销毁。
  • dns.BADSTR: 字符串格式错误。
  • dns.BADFLAGS: 指定了非法标记。
  • dns.NONAME: 所给主机名非数字。
  • dns.BADHINTS: 指定了非法提示标记。
  • dns.NOTINITIALIZED: c-ares 库初始化尚未进行。
  • dns.LOADIPHLPAPI: 加载 iphlpapi.dll 出错。
  • dns.ADDRGETNETWORKPARAMS: 无法找到 GetNetworkParams 函数。
  • dns.CANCELLED: DNS 查询取消。

HTTP#

稳定度: 3 - 稳定

To use the HTTP server and client one must require('http').

要使用 HTTP 服务器和客户端,需调用 require('http')。

The HTTP interfaces in Node are designed to support many features of the protocol which have been traditionally difficult to use. In particular, large, possibly chunk-encoded, messages. The interface is careful to never buffer entire requests or responses--the user is able to stream data.

Node 中的 HTTP 接口被设计为支持该协议中许多传统上难以使用的特性,特别是大型的、可能以块编码传输的消息。这些接口被谨慎设计为从不缓存完整的请求或响应,因此用户能够以流的方式处理数据。

HTTP message headers are represented by an object like this:

HTTP 的消息头(Headers)通过如下对象来表示:

{ 'content-length': '123',
  'content-type': 'text/plain',
  'connection': 'keep-alive',
  'host': 'mysite.com',
  'accept': '*/*' }

Keys are lowercased. Values are not modified.

其中键为小写字母,值不会被修改。

In order to support the full spectrum of possible HTTP applications, Node's HTTP API is very low-level. It deals with stream handling and message parsing only. It parses a message into headers and body but it does not parse the actual headers or the body.

为了能全面支持各种可能的 HTTP 应用,Node 的 HTTP API 非常底层。它只处理流和消息解析:把一条消息解析成报文头和报文体,但不解析实际的报文头内容或报文体内容。

Defined headers that allow multiple values are concatenated with a , character, except for the set-cookie and cookie headers which are represented as an array of values. Headers such as content-length which can only have a single value are parsed accordingly, and only a single value is represented on the parsed object.

允许多个值的已定义消息头会以 , 字符连接,但 set-cookie 和 cookie 头除外,它们以值数组的形式表示。像 content-length 这样只能有单个值的消息头会被相应解析,解析后的对象上只表示单个值。

The raw headers as they were received are retained in the rawHeaders property, which is an array of [key, value, key2, value2, ...]. For example, the previous message header object might have a rawHeaders list like the following:

接收到的原始头信息以数组形式 [key, value, key2, value2, ...] 保存在 rawHeaders 属性中. 例如, 前面提到的消息对象会有 rawHeaders 列表如下:

[ 'ConTent-Length', '123456',
  'content-LENGTH', '123',
  'content-type', 'text/plain',
  'CONNECTION', 'keep-alive',
  'Host', 'mysite.com',
  'accepT', '*/*' ]

http.STATUS_CODES#

  • Object

  • Object

A collection of all the standard HTTP response status codes, and the short description of each. For example, http.STATUS_CODES[404] === 'Not Found'.

所有标准 HTTP 响应状态码的集合,以及每个状态码的简短描述。例如:http.STATUS_CODES[404] === 'Not Found'。

http.createServer([requestListener])#

Returns a new web server object.

返回一个新的web服务器对象

The requestListener is a function which is automatically added to the 'request' event.

参数 requestListener 是一个函数,它将会自动加入到 'request' 事件的监听队列.

http.createClient([port], [host])#

This function is deprecated; please use http.request() instead. Constructs a new HTTP client. port and host refer to the server to be connected to.

该函数已弃用,请用 http.request() 代替。创建一个新的 HTTP 客户端。port 和 host 表示所连接的服务器。

Class: http.Server#

This is an EventEmitter with the following events:

这是一个包含下列事件的EventEmitter:

Event: 'request'#

function (request, response) { }

function (request, response) { }

Emitted each time there is a request. Note that there may be multiple requests per connection (in the case of keep-alive connections). request is an instance of http.IncomingMessage and response is an instance of http.ServerResponse

每次收到一个请求时触发。注意每个连接可能有多个请求(在 keep-alive 的连接中)。request 是 http.IncomingMessage 的一个实例,response 是 http.ServerResponse 的一个实例。

事件: 'connection'#

function (socket) { }

function (socket) { }

When a new TCP stream is established. socket is an object of type net.Socket. Usually users will not want to access this event. In particular, the socket will not emit readable events because of how the protocol parser attaches to the socket. The socket can also be accessed at request.connection.

新的 TCP 流建立时触发。socket 是一个 net.Socket 对象。通常用户无需处理该事件。特别注意,由于协议解析器绑定套接字的方式,套接字不会触发 readable 事件。还可以通过 request.connection 访问 socket。

事件: 'close'#

function () { }

function () { }

Emitted when the server closes.

当此服务器关闭时触发

Event: 'checkContinue'#

function (request, response) { }

function (request, response) { }

Emitted each time a request with an http Expect: 100-continue is received. If this event isn't listened for, the server will automatically respond with a 100 Continue as appropriate.

每当收到Expect: 100-continue的http请求时触发。 如果未监听该事件,服务器会酌情自动发送100 Continue响应。

Handling this event involves calling response.writeContinue if the client should continue to send the request body, or generating an appropriate HTTP response (e.g., 400 Bad Request) if the client should not continue to send the request body.

处理该事件时,如果客户端可以继续发送请求主体则调用response.writeContinue, 如果不能则生成合适的HTTP响应(例如,400 请求无效)。

Note that when this event is emitted and handled, the request event will not be emitted.

需要注意,当这个事件被触发并被处理后,request 事件将不会再被触发。

事件: 'connect'#

function (request, socket, head) { }

function (request, socket, head) { }

Emitted each time a client requests a http CONNECT method. If this event isn't listened for, then clients requesting a CONNECT method will have their connections closed.

每当客户端发起 CONNECT 请求时触发。如果未监听该事件,发起 CONNECT 请求的客户端的连接会被关闭。

  • request is the arguments for the http request, as it is in the request event.
  • socket is the network socket between the server and client.
  • head is an instance of Buffer, the first packet of the tunneling stream, this may be empty.

  • request 是该HTTP请求的参数,与request事件中的相同。

  • socket 是服务端与客户端之间的网络套接字。
  • head 是一个Buffer实例,隧道流的第一个包,该参数可能为空。

After this event is emitted, the request's socket will not have a data event listener, meaning you will need to bind to it in order to handle data sent to the server on that socket.

在这个事件被分发后,请求的套接字将不会有data事件监听器,也就是说你将需要绑定一个监听器到data事件,来处理在套接字上被发送到服务器的数据。

Event: 'upgrade'#

function (request, socket, head) { }

function (request, socket, head) { }

Emitted each time a client requests a http upgrade. If this event isn't listened for, then clients requesting an upgrade will have their connections closed.

每当一个客户端请求http升级时,该事件被分发。如果这个事件没有被监听,那么这些请求升级的客户端的连接将会被关闭。

  • request is the arguments for the http request, as it is in the request event.
  • socket is the network socket between the server and client.
  • head is an instance of Buffer, the first packet of the upgraded stream, this may be empty.

  • request 是该HTTP请求的参数,与request事件中的相同。

  • socket 是服务端与客户端之间的网络套接字。
  • head 是一个Buffer实例,升级后流的第一个包,该参数可能为空。

After this event is emitted, the request's socket will not have a data event listener, meaning you will need to bind to it in order to handle data sent to the server on that socket.

在这个事件被分发后,请求的套接字将不会有data事件监听器,也就是说你将需要绑定一个监听器到data事件,来处理在套接字上被发送到服务器的数据。

Event: 'clientError'#

function (exception, socket) { }

function (exception, socket) { }

If a client connection emits an 'error' event - it will forwarded here.

如果一个客户端连接触发了 'error' 事件,它会被转发到这里。

socket is the net.Socket object that the error originated from.

socket 是导致错误的 net.Socket 对象。

server.listen(port, [hostname], [backlog], [callback])#

Begin accepting connections on the specified port and hostname. If the hostname is omitted, the server will accept connections directed to any IPv4 address (INADDR_ANY).

开始在指定的主机名和端口接收连接。如果省略主机名,服务器会接收指向任意IPv4地址的链接(INADDR_ANY)。

To listen to a unix socket, supply a filename instead of port and hostname.

要监听一个 Unix 套接字,需提供一个文件名来代替端口号和主机名。

Backlog is the maximum length of the queue of pending connections. The actual length will be determined by your OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on linux. The default value of this parameter is 511 (not 512).

积压量 backlog 为连接等待队列的最大长度。实际长度由您的操作系统通过 sysctl 设置决定,比如 Linux 上的 tcp_max_syn_backlogsomaxconn。该参数缺省值为 511(不是 512)。

This function is asynchronous. The last parameter callback will be added as a listener for the 'listening' event. See also net.Server.listen(port).

这个函数是异步的。最后一个参数callback会被作为事件监听器添加到 'listening'事件。另见net.Server.listen(port)

server.listen(path, [callback])#

Start a UNIX socket server listening for connections on the given path.

启动一个 UNIX 套接字服务器在所给路径 path 上监听连接。

This function is asynchronous. The last parameter callback will be added as a listener for the 'listening' event. See also net.Server.listen(path).

该函数是异步的。最后一个参数 callback 会被添加为 'listening' 事件的监听器。另见 net.Server.listen(path)。

server.listen(handle, [callback])#

  • handle Object
  • callback Function

  • handle Object

  • callback Function

The handle object can be set to either a server or socket (anything with an underlying _handle member), or a {fd: <n>} object.

handle 对象可以是一个 server 或 socket(任何具有底层 _handle 成员的对象),或者一个 {fd: <n>} 对象。

This will cause the server to accept connections on the specified handle, but it is presumed that the file descriptor or handle has already been bound to a port or domain socket.

这将使服务器用指定的句柄接受连接,但它假设文件描述符或者句柄已经被绑定在特定的端口或者域名套接字。

Listening on a file descriptor is not supported on Windows.

Windows 不支持监听一个文件描述符。

This function is asynchronous. The last parameter callback will be added as a listener for the 'listening' event. See also net.Server.listen().

这个函数是异步的。最后一个参数callback会被作为事件监听器添加到'listening'事件。另见net.Server.listen()

server.close([callback])#

Stops the server from accepting new connections. See net.Server.close().

使服务器停止接收新的连接。参见 net.Server.close()。

server.maxHeadersCount#

Limits maximum incoming headers count, equal to 1000 by default. If set to 0 - no limit will be applied.

限制请求头的最大数量,缺省为 1000。如果设置为 0,则不做任何限制。

server.setTimeout(msecs, callback)#

  • msecs Number
  • callback Function

  • msecs Number

  • callback Function

Sets the timeout value for sockets, and emits a 'timeout' event on the Server object, passing the socket as an argument, if a timeout occurs.

为套接字设定超时值。如果一个超时发生,那么Server对象上会分发一个'timeout'事件,同时将套接字作为参数传递。

If there is a 'timeout' event listener on the Server object, then it will be called with the timed-out socket as an argument.

如果在Server对象上有一个'timeout'事件监听器,那么它将被调用,而超时的套接字会作为参数传递给这个监听器。

By default, the Server's timeout value is 2 minutes, and sockets are destroyed automatically if they time out. However, if you assign a callback to the Server's 'timeout' event, then you are responsible for handling socket timeouts.

默认情况下,服务器的超时时间是2分钟,超时后套接字会自动销毁。 但是如果为‘timeout’事件指定了回调函数,你需要负责处理套接字超时。

server.timeout#

  • Number Default = 120000 (2 minutes)

  • Number 默认 120000 (2 分钟)

The number of milliseconds of inactivity before a socket is presumed to have timed out.

一个套接字被判断为超时之前的闲置毫秒数。

Note that the socket timeout logic is set up on connection, so changing this value only affects new connections to the server, not any existing connections.

注意套接字的超时逻辑在连接时被设定,所以更改这个值只会影响新创建的连接,而不会影响到现有连接。

Set to 0 to disable any kind of automatic timeout behavior on incoming connections.

设为 0 可禁用传入连接的一切自动超时行为。

Class: http.ServerResponse#

This object is created internally by a HTTP server--not by the user. It is passed as the second parameter to the 'request' event.

该对象由 HTTP 服务器内部创建,而非由用户创建。它会作为第二个参数传递给 'request' 事件。

The response implements the Writable Stream interface. This is an EventEmitter with the following events:

response 实现了可写流(Writable Stream)接口。这是一个包含下列事件的 EventEmitter:

事件: 'close'#

function () { }

function () { }

Indicates that the underlying connection was terminated before response.end() was called or able to flush.

表示底层连接在 response.end() 被调用或数据得以写出之前就被终止了。

response.writeContinue()#

Sends a HTTP/1.1 100 Continue message to the client, indicating that the request body should be sent. See the 'checkContinue' event on Server.

向客户端发送一条 HTTP/1.1 100 Continue 消息,表示请求主体可以开始发送。参见 Server 上的 'checkContinue' 事件。

response.writeHead(statusCode, [reasonPhrase], [headers])#

Sends a response header to the request. The status code is a 3-digit HTTP status code, like 404. The last argument, headers, are the response headers. Optionally one can give a human-readable reasonPhrase as the second argument.

向请求发送响应头。statusCode 是一个三位数的 HTTP 状态码,例如 404。最后一个参数 headers 是响应头的内容。可选地,可以将人类可读的原因短语 reasonPhrase 作为第二个参数。

Example:

示例:

var body = 'hello world';
response.writeHead(200, {
  'Content-Length': body.length,
  'Content-Type': 'text/plain' });

This method must only be called once on a message and it must be called before response.end() is called.

该方法对一条消息只能调用一次,且必须在 response.end() 被调用之前调用。

If you call response.write() or response.end() before calling this, the implicit/mutable headers will be calculated and call this function for you.

如果您在调用该方法之前调用了 response.write() 或 response.end(),则隐式/可变的响应头会被计算出来,并自动为您调用该函数。

Note: that Content-Length is given in bytes not characters. The above example works because the string 'hello world' contains only single byte characters. If the body contains higher coded characters then Buffer.byteLength() should be used to determine the number of bytes in a given encoding. And Node does not check whether Content-Length and the length of the body which has been transmitted are equal or not.

注意:Content-Length 是以字节(byte)计,而不是以字符(character)计。上面的例子能正常工作是因为字符串 'hello world' 只包含单字节字符。如果 body 包含多字节编码的字符,就应当使用 Buffer.byteLength() 来确定给定编码下的字节数。另外,Node 不会检查 Content-Length 与已传输的 body 长度是否相符。
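
A small runnable sketch of the difference (the sample string is our own; any multi-byte UTF-8 text behaves the same):

```javascript
// 'hello world' is one byte per character, but this string is not:
// '你好' is 2 characters yet 6 bytes in UTF-8.
var body = '你好';

var charCount = body.length;                      // counts characters
var byteCount = Buffer.byteLength(body, 'utf8');  // counts bytes

// Use byteCount, not charCount, for a correct Content-Length.
console.log(charCount + ' characters, ' + byteCount + ' bytes');
```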

response.setTimeout(msecs, callback)#

  • msecs Number
  • callback Function

Sets the Socket's timeout value to msecs. If a callback is provided, then it is added as a listener on the 'timeout' event on the response object.

设定套接字的超时时间为msecs。如果提供了回调函数,会将其添加为响应对象的'timeout'事件的监听器。

If no 'timeout' listener is added to the request, the response, or the server, then sockets are destroyed when they time out. If you assign a handler on the request, the response, or the server's 'timeout' events, then it is your responsibility to handle timed out sockets.

如果请求、响应、服务器均未添加'timeout'事件监听,套接字将在超时时被销毁。 如果监听了请求、响应、服务器之一的'timeout'事件,需要自行处理超时的套接字。

response.statusCode#

When using implicit headers (not calling response.writeHead() explicitly), this property controls the status code that will be sent to the client when the headers get flushed.

Example:

示例:

response.statusCode = 404;

After response header was sent to the client, this property indicates the status code which was sent out.

response.setHeader(name, value)#

Sets a single header value for implicit headers. If this header already exists in the to-be-sent headers, its value will be replaced. Use an array of strings here if you need to send multiple headers with the same name.

为隐式响应头设置单个头的值。如果该头已存在于待发送的响应头中,其值将被替换。如果需要发送多个同名的头,可以使用字符串数组。

Example:

示例:

response.setHeader("Content-Type", "text/html");

or

或者

response.setHeader("Set-Cookie", ["type=ninja", "language=javascript"]);

response.headersSent#

Boolean (read-only). True if headers were sent, false otherwise.

布尔值(只读)。如果响应头已发送则为 true,否则为 false。

response.sendDate#

When true, the Date header will be automatically generated and sent in the response if it is not already present in the headers. Defaults to true.

若为 true,则当响应头中尚无 Date 头时,会自动生成并随响应发送。默认值为 true。

This should only be disabled for testing; HTTP requires the Date header in responses.

只应在测试时禁用它;HTTP 协议要求响应包含 Date 头。

response.getHeader(name)#

Reads out a header that's already been queued but not sent to the client. Note that the name is case insensitive. This can only be called before headers get implicitly flushed.

Example:

示例:

var contentType = response.getHeader('content-type');

response.removeHeader(name)#

Removes a header that's queued for implicit sending.

Example:

示例:

response.removeHeader("Content-Encoding");

response.write(chunk, [encoding])#

If this method is called and response.writeHead() has not been called, it will switch to implicit header mode and flush the implicit headers.

This sends a chunk of the response body. This method may be called multiple times to provide successive parts of the body.

chunk can be a string or a buffer. If chunk is a string, the second parameter specifies how to encode it into a byte stream. By default the encoding is 'utf8'.

Note: This is the raw HTTP body and has nothing to do with higher-level multi-part body encodings that may be used.

The first time response.write() is called, it will send the buffered header information and the first body to the client. The second time response.write() is called, Node assumes you're going to be streaming data, and sends that separately. That is, the response is buffered up to the first chunk of body.

Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is again free.

如果所有数据被成功刷新到内核缓冲区,则返回 true;如果部分或全部数据仍在用户内存中排队,则返回 false。当缓冲区再次空闲时,会触发 'drain' 事件。

response.addTrailers(headers)#

This method adds HTTP trailing headers (a header but at the end of the message) to the response.

Trailers will only be emitted if chunked encoding is used for the response; if it is not (e.g., if the request was HTTP/1.0), they will be silently discarded.

Note that HTTP requires the Trailer header to be sent if you intend to emit trailers, with a list of the header fields in its value. E.g.,

response.writeHead(200, { 'Content-Type': 'text/plain',
                          'Trailer': 'Content-MD5' });
response.write(fileData);
response.addTrailers({'Content-MD5': "7895bf4b8828b55ceaf47747b4bca667"});
response.end();

response.end([data], [encoding])#

This method signals to the server that all of the response headers and body have been sent; that server should consider this message complete. The method, response.end(), MUST be called on each response.

If data is specified, it is equivalent to calling response.write(data, encoding) followed by response.end().

http.request(options, callback)#

Node maintains several connections per server to make HTTP requests. This function allows one to transparently issue requests.

options can be an object or a string. If options is a string, it is automatically parsed with url.parse().

options 可以是一个对象或一个字符串。如果 options 是一个字符串,它将自动地使用 url.parse() 解析。

Options:

选项:

  • host: A domain name or IP address of the server to issue the request to. Defaults to 'localhost'.
  • hostname: To support url.parse() hostname is preferred over host
  • port: Port of remote server. Defaults to 80.
  • localAddress: Local interface to bind for network connections.
  • socketPath: Unix Domain Socket (use one of host:port or socketPath)
  • method: A string specifying the HTTP request method. Defaults to 'GET'.
  • path: Request path. Defaults to '/'. Should include query string if any. E.G. '/index.html?page=12'. An exception is thrown when the request path contains illegal characters. Currently, only spaces are rejected but that may change in the future.
  • headers: An object containing request headers.
  • auth: Basic authentication i.e. 'user:password' to compute an Authorization header.
  • agent: Controls Agent behavior. When an Agent is used request will default to Connection: keep-alive. Possible values:
    • undefined (default): use global Agent for this host and port.
    • Agent object: explicitly use the passed in Agent.
    • false: opts out of connection pooling with an Agent, defaults request to Connection: close.
  • keepAlive: {Boolean} Keep sockets around in a pool to be used by other requests in the future. Default = false
  • keepAliveMsecs: {Integer} When using HTTP KeepAlive, how often to send TCP KeepAlive packets over sockets being kept alive. Default = 1000. Only relevant if keepAlive is set to true.

  • host: 要发送请求的服务端域名或IP地址。 默认为'localhost'

  • hostname: 要支持url.parse()的话,优先使用hostname而不是host
  • port: 远程服务器的端口。默认为80。
  • localAddress: 本地接口,用来绑定网络连接。
  • socketPath: Unix Domain Socket(host:port 与 socketPath 二选一)
  • method: 指定 HTTP 请求方法的字符串。默认为 'GET'。
  • path: 请求路径。默认为 '/'。如有查询字符串则应包含在内,例如 '/index.html?page=12'。当请求路径包含非法字符时会抛出异常。目前仅空格会被拒绝,但将来可能会改变。
  • headers: 包含请求头的对象。
  • auth: 基本认证(Basic Authentication),即 'user:password',用于计算 Authorization 头。
  • agent: 控制 Agent 的行为。当使用 Agent 时,请求默认为 Connection: keep-alive。可选值有:
    • undefined(默认):为该主机和端口使用全局 Agent。
    • Agent 对象:明确使用传入的 Agent。
    • false:不使用 Agent 连接池,请求默认为 Connection: close。
  • keepAlive: {Boolean} 将套接字保留在池中,供将来其他请求使用。默认为 false。
  • keepAliveMsecs: {Integer} 使用 HTTP KeepAlive 时,在保持活动的套接字上发送 TCP KeepAlive 包的间隔。默认为 1000。仅当 keepAlive 为 true 时有效。

http.request() returns an instance of the http.ClientRequest class. The ClientRequest instance is a writable stream. If one needs to upload a file with a POST request, then write to the ClientRequest object.

http.request() 返回一个 http.ClientRequest类的实例。ClientRequest实例是一个可写流对象。如果需要用POST请求上传一个文件的话,就将其写入到ClientRequest对象。

Example:

示例:

// write data to request body
req.write('data\n');
req.write('data\n');
req.end();

Note that in the example req.end() was called. With http.request() one must always call req.end() to signify that you're done with the request - even if there is no data being written to the request body.

注意,例子中调用了 req.end()。使用 http.request() 时必须总是调用 req.end() 来表明请求已经完成,即使没有数据写入请求 body。

If any error is encountered during the request (be that with DNS resolution, TCP level errors, or actual HTTP parse errors) an 'error' event is emitted on the returned request object.

There are a few special headers that should be noted.

  • Sending a 'Connection: keep-alive' will notify Node that the connection to the server should be persisted until the next request.

  • Sending a 'Content-length' header will disable the default chunked encoding.

  • 发送 'Content-length' 头将会禁用默认的 chunked 编码.

  • Sending an 'Expect' header will immediately send the request headers. Usually, when sending 'Expect: 100-continue', you should both set a timeout and listen for the continue event. See RFC2616 Section 8.2.3 for more information.

  • Sending an Authorization header will override using the auth option to compute basic authentication.

http.get(options, callback)#

Since most requests are GET requests without bodies, Node provides this convenience method. The only difference between this method and http.request() is that it sets the method to GET and calls req.end() automatically.

Example:

示例:

http.get("http://www.google.com/index.html", function(res) {
  console.log("Got response: " + res.statusCode);
}).on('error', function(e) {
  console.log("Got error: " + e.message);
});

Class: http.Agent#

The HTTP Agent is used for pooling sockets used in HTTP client requests.

The HTTP Agent also defaults client requests to using Connection:keep-alive. If no pending HTTP requests are waiting on a socket to become free the socket is closed. This means that Node's pool has the benefit of keep-alive when under load but still does not require developers to manually close the HTTP clients using KeepAlive.

If you opt into using HTTP KeepAlive, you can create an Agent object with that flag set to true. (See the constructor options below.) Then, the Agent will keep unused sockets in a pool for later use. They will be explicitly marked so as to not keep the Node process running. However, it is still a good idea to explicitly destroy() KeepAlive agents when they are no longer in use, so that the Sockets will be shut down.

Sockets are removed from the agent's pool when the socket emits either a "close" event or a special "agentRemove" event. This means that if you intend to keep one HTTP request open for a long time and don't want it to stay in the pool you can do something along the lines of:

http.get(options, function(res) {
  // Do stuff
}).on("socket", function (socket) {
  socket.emit("agentRemove");
});

Alternatively, you could just opt out of pooling entirely using agent:false:

http.get({
  hostname: 'localhost',
  port: 80,
  path: '/',
  agent: false  // create a new agent just for this one request
}, function (res) {
  // Do stuff with response
})

new Agent([options])#

  • options Object Set of configurable options to set on the agent. Can have the following fields:
    • keepAlive Boolean Keep sockets around in a pool to be used by other requests in the future. Default = false
    • keepAliveMsecs Integer When using HTTP KeepAlive, how often to send TCP KeepAlive packets over sockets being kept alive. Default = 1000. Only relevant if keepAlive is set to true.
    • maxSockets Number Maximum number of sockets to allow per host. Default = Infinity.
    • maxFreeSockets Number Maximum number of sockets to leave open in a free state. Only relevant if keepAlive is set to true. Default = 256.

The default http.globalAgent that is used by http.request has all of these values set to their respective defaults.

To configure any of them, you must create your own Agent object.

要配置这些值,你必须创建一个你自己的Agent对象。

var http = require('http');
var keepAliveAgent = new http.Agent({ keepAlive: true });
options.agent = keepAliveAgent;  // use the custom agent for this request
http.request(options, onResponseCallback);

agent.maxSockets#

By default set to Infinity. Determines how many concurrent sockets the agent can have open per host.

agent.maxFreeSockets#

By default set to 256. For Agents supporting HTTP KeepAlive, this sets the maximum number of sockets that will be left open in the free state.

agent.sockets#

An object which contains arrays of sockets currently in use by the Agent. Do not modify.

agent.freeSockets#

An object which contains arrays of sockets currently awaiting use by the Agent when HTTP KeepAlive is used. Do not modify.

agent.requests#

An object which contains queues of requests that have not yet been assigned to sockets. Do not modify.

agent.destroy()#

Destroy any sockets that are currently in use by the agent.

销毁被此 agent 正在使用着的所有 sockets.

It is usually not necessary to do this. However, if you are using an agent with KeepAlive enabled, then it is best to explicitly shut down the agent when you know that it will no longer be used. Otherwise, sockets may hang open for quite a long time before the server terminates them.

agent.getName(options)#

Get a unique name for a set of request options, to determine whether a connection can be reused. In the http agent, this returns host:port:localAddress. In the https agent, the name includes the CA, cert, ciphers, and other HTTPS/TLS-specific options that determine socket reusability.

http.globalAgent#

Global instance of Agent which is used as the default for all http client requests.

Class: http.ClientRequest#

This object is created internally and returned from http.request(). It represents an in-progress request whose header has already been queued. The header is still mutable using the setHeader(name, value), getHeader(name), removeHeader(name) API. The actual header will be sent along with the first data chunk or when closing the connection.

To get the response, add a listener for 'response' to the request object. 'response' will be emitted from the request object when the response headers have been received. The 'response' event is executed with one argument which is an instance of http.IncomingMessage.

During the 'response' event, one can add listeners to the response object; particularly to listen for the 'data' event.

If no 'response' handler is added, then the response will be entirely discarded. However, if you add a 'response' event handler, then you must consume the data from the response object, either by calling response.read() whenever there is a 'readable' event, or by adding a 'data' handler, or by calling the .resume() method. Until the data is consumed, the 'end' event will not fire. Also, until the data is read it will consume memory that can eventually lead to a 'process out of memory' error.

Note: Node does not check whether Content-Length and the length of the body which has been transmitted are equal or not.

The request implements the Writable Stream interface. This is an EventEmitter with the following events:

Event 'response'#

function (response) { }

Emitted when a response is received to this request. This event is emitted only once. The response argument will be an instance of http.IncomingMessage.

Options:

选项:

  • host: A domain name or IP address of the server to issue the request to.
  • port: Port of remote server.
  • socketPath: Unix Domain Socket (use one of host:port or socketPath)

  • host: 请求要发送的域名或服务器的IP地址。

  • port: 远程服务器的端口。
  • socketPath: Unix Domain Socket(host:port 与 socketPath 二选一)

Event: 'socket'#

function (socket) { }

Emitted after a socket is assigned to this request.

当一个套接字被分配到这个请求之后,会触发该事件。

事件: 'connect'#

function (response, socket, head) { }

Emitted each time a server responds to a request with a CONNECT method. If this event isn't being listened for, clients receiving a CONNECT method will have their connections closed.

An example of listening for the 'connect' event. The snippet assumes req is a CONNECT request already issued through an HTTP tunneling proxy proxy:

req.on('connect', function(res, socket, head) {
  // make a request over an HTTP tunnel
  socket.write('GET / HTTP/1.1\r\n' +
               'Host: www.google.com:80\r\n' +
               'Connection: close\r\n' +
               '\r\n');
  socket.on('data', function(chunk) {
    console.log(chunk.toString());
  });
  socket.on('end', function() {
    proxy.close();
  });
});

Event: 'upgrade'#

function (response, socket, head) { }

Emitted each time a server responds to a request with an upgrade. If this event isn't being listened for, clients receiving an upgrade header will have their connections closed.

An example of listening for the 'upgrade' event. The snippet assumes req is a request that was already sent with Connection: Upgrade headers:

req.on('upgrade', function(res, socket, upgradeHead) {
  console.log('got upgraded!');
  socket.end();
  process.exit(0);
});

Event: 'continue'#

function () { }

Emitted when the server sends a '100 Continue' HTTP response, usually because the request contained 'Expect: 100-continue'. This is an instruction that the client should send the request body.

request.write(chunk, [encoding])#

Sends a chunk of the body. By calling this method many times, the user can stream a request body to a server--in that case it is suggested to use the ['Transfer-Encoding', 'chunked'] header line when creating the request.

The chunk argument should be a Buffer or a string.

chunk 参数应当是一个 Buffer 或字符串。

The encoding argument is optional and only applies when chunk is a string. Defaults to 'utf8'.

encoding 参数是可选的,仅当 chunk 是字符串时适用。默认为 'utf8'。

request.end([data], [encoding])#

Finishes sending the request. If any parts of the body are unsent, it will flush them to the stream. If the request is chunked, this will send the terminating '0\r\n\r\n'.

If data is specified, it is equivalent to calling request.write(data, encoding) followed by request.end().

request.abort()#

Aborts a request. (New since v0.3.8.)

终止一个请求. (从 v0.3.8 开始新加.)

request.setTimeout(timeout, [callback])#

Once a socket is assigned to this request and is connected socket.setTimeout() will be called.

request.setNoDelay([noDelay])#

Once a socket is assigned to this request and is connected socket.setNoDelay() will be called.

request.setSocketKeepAlive([enable], [initialDelay])#

Once a socket is assigned to this request and is connected socket.setKeepAlive() will be called.

一旦一个套接字被分配到这个请求,而且成功连接,那么socket.setKeepAlive()就会被调用。

http.IncomingMessage#

An IncomingMessage object is created by http.Server or http.ClientRequest and passed as the first argument to the 'request' and 'response' event respectively. It may be used to access response status, headers and data.

IncomingMessage 对象由 http.Serverhttp.ClientRequest 创建,并分别作为第一个参数传递给 'request''response' 事件。它可以用来访问响应的状态、头和数据。

It implements the Readable Stream interface, as well as the following additional events, methods, and properties.

它实现了可读流(Readable Stream)接口,以及以下额外的事件、方法和属性。

事件: 'close'#

function () { }

Indicates that the underlying connection was terminated before response.end() was called or able to flush.

表示在response.end()被调用或强制刷新之前,底层的连接已经被终止了。

Just like 'end', this event occurs only once per response. See http.ServerResponse's 'close' event for more information.

'end' 一样,这个事件对每个响应只会触发一次。详见 http.ServerResponse'close' 事件。

message.httpVersion#

In case of server request, the HTTP version sent by the client. In the case of client response, the HTTP version of the connected-to server. Probably either '1.1' or '1.0'.

客户端向服务器发出请求时,客户端发送的HTTP版本;或是服务器向客户端返回应答时,服务器的HTTP版本。通常是 '1.1''1.0'

Also response.httpVersionMajor is the first integer and response.httpVersionMinor is the second.

message.headers#

The request/response headers object.

请求/响应的头对象。

Read only map of header names and values. Header names are lower-cased. Example:

只读的头名称与值的映射。头名称全部为小写。示例:

// 输出类似这样:
//
// { 'user-agent': 'curl/7.22.0',
//   host: '127.0.0.1:8000',
//   accept: '*/*' }
console.log(request.headers);

message.rawHeaders#

The raw request/response headers list exactly as they were received.

Note that the keys and values are in the same list. It is not a list of tuples. So, the even-numbered offsets are key values, and the odd-numbered offsets are the associated values.

Header names are not lowercased, and duplicates are not merged.

// Prints something like:
//
// [ 'user-agent',
//   'this is invalid because there can be only one',
//   'User-Agent',
//   'curl/7.22.0',
//   'Host',
//   '127.0.0.1:8000',
//   'ACCEPT',
//   '*/*' ]
console.log(request.rawHeaders);

message.trailers#

The request/response trailers object. Only populated at the 'end' event.

message.rawTrailers#

The raw request/response trailer keys and values exactly as they were received. Only populated at the 'end' event.

message.setTimeout(msecs, callback)#

  • msecs Number
  • callback Function

Calls message.connection.setTimeout(msecs, callback).

调用message.connection.setTimeout(msecs, callback)

message.method#

Only valid for request obtained from http.Server.

仅对从 http.Server 获得的请求(request)有效。

The request method as a string. Read only. Example: 'GET', 'DELETE'.

字符串形式的请求方法。只读。例如:'GET''DELETE'

message.url#

Only valid for request obtained from http.Server.

仅对从 http.Server 获得的请求(request)有效。

Request URL string. This contains only the URL that is present in the actual HTTP request. If the request is:

请求的 URL 字符串。它仅包含实际 HTTP 请求中出现的 URL。假如请求为:

GET /status?name=ryan HTTP/1.1\r\n
Accept: text/plain\r\n
\r\n

Then request.url will be:

request.url 为:

'/status?name=ryan'

If you would like to parse the URL into its parts, you can use require('url').parse(request.url). Example:

如果你想将该 URL 解析成各个组成部分,可以使用 require('url').parse(request.url)。例如:

node> require('url').parse('/status?name=ryan')
{ href: '/status?name=ryan',
  search: '?name=ryan',
  query: 'name=ryan',
  pathname: '/status' }

If you would like to extract the params from the query string, you can use the require('querystring').parse function, or pass true as the second argument to require('url').parse. Example:

如果你想从查询字符串(query string)中提取参数,可以使用 require('querystring').parse 函数,或者将 true 作为第二个参数传递给 require('url').parse。例如:

node> require('url').parse('/status?name=ryan', true)
{ href: '/status?name=ryan',
  search: '?name=ryan',
  query: { name: 'ryan' },
  pathname: '/status' }

message.statusCode#

Only valid for response obtained from http.ClientRequest.

仅对从http.ClientRequest获得的响应(response)有效.

The 3-digit HTTP response status code. E.G. 404.

三位数的HTTP响应状态码. 例如 404.

message.socket#

The net.Socket object associated with the connection.

与此连接(connection)关联的net.Socket对象.

With HTTPS support, use request.connection.verifyPeer() and request.connection.getPeerCertificate() to obtain the client's authentication details.

在支持 HTTPS 时,可使用 request.connection.verifyPeer()request.connection.getPeerCertificate() 来获取客户端的身份验证信息。

HTTPS#

稳定度: 3 - 稳定

HTTPS is the HTTP protocol over TLS/SSL. In Node this is implemented as a separate module.

HTTPS 是建立在 TLS/SSL 之上的 HTTP 协议。在 Node 中被实现为单独的模块。

类: https.Server#

This class is a subclass of tls.Server and emits events same as http.Server. See http.Server for more information.

该类是 tls.Server 的子类,并且触发与 http.Server 相同的事件。更多信息详见 http.Server

server.setTimeout(msecs, callback)#

See http.Server#setTimeout().

详见 http.Server#setTimeout()

server.timeout#

See http.Server#timeout.

详见 http.Server#timeout

https.createServer(options, [requestListener])#

Returns a new HTTPS web server object. The options is similar to tls.createServer(). The requestListener is a function which is automatically added to the 'request' event.

返回一个新的 HTTPS Web 服务器对象。其中 options 类似于 tls.createServer()requestListener 是一个会被自动添加到 request 事件的函数。

Example:

示例:

var options = {
  key: fs.readFileSync('agent2-key.pem'),    // server private key
  cert: fs.readFileSync('agent2-cert.pem')   // server certificate
};
https.createServer(options, function (req, res) {
  res.writeHead(200);
  res.end("hello world\n");
}).listen(8000);

Or

或者

var options = {
  pfx: fs.readFileSync('server.pfx')   // key and cert bundled together
};
https.createServer(options, function (req, res) {
  res.writeHead(200);
  res.end("hello world\n");
}).listen(8000);

server.listen(port, [host], [backlog], [callback])#

server.listen(path, [callback])#

server.listen(handle, [callback])#

See http.listen() for details.

详见 http.listen()

server.close([callback])#

See http.close() for details.

详见 http.close()

https.request(options, callback)#

Makes a request to a secure web server.

向一个安全 Web 服务器发送请求。

options can be an object or a string. If options is a string, it is automatically parsed with url.parse().

options 可以是一个对象或字符串。如果 options 是字符串,它会自动被 url.parse() 解析。

All options from http.request() are valid.

http.request() 的所有选项在这里都是有效的。

Example:

示例:

// assuming req was returned by https.request(options, callback)
req.on('error', function(e) {
  console.error(e);
});

The options argument has the following options

options 参数有如下选项

  • host: A domain name or IP address of the server to issue the request to. Defaults to 'localhost'.
  • hostname: To support url.parse() hostname is preferred over host
  • port: Port of remote server. Defaults to 443.
  • method: A string specifying the HTTP request method. Defaults to 'GET'.
  • path: Request path. Defaults to '/'. Should include query string if any. E.G. '/index.html?page=12'
  • headers: An object containing request headers.
  • auth: Basic authentication i.e. 'user:password' to compute an Authorization header.
  • agent: Controls Agent behavior. When an Agent is used request will default to Connection: keep-alive. Possible values:

    • undefined (default): use globalAgent for this host and port.
    • Agent object: explicitly use the passed in Agent.
    • false: opts out of connection pooling with an Agent, defaults request to Connection: close.
  • host:发送请求的服务器的域名或 IP 地址,缺省为 'localhost'

  • hostname:为了支持 url.parse()hostname 优先于 host
  • port:远程服务器的端口,缺省为 443。
  • method:指定 HTTP 请求方法的字符串,缺省为 'GET'
  • path:请求路径,缺省为 '/'。如有查询字串则应包含,比如 '/index.html?page=12'
  • headers:包含请求头的对象。
  • auth:基本认证,如 'user:password' 来计算 Authorization 头。
  • agent:控制 Agent 行为。当使用 Agent 时请求会缺省为 Connection: keep-alive。可选值有:
    • undefined(缺省):为该主机和端口使用 globalAgent
    • Agent 对象:明确使用传入的 Agent
    • false:不使用 Agent 连接池,缺省请求 Connection: close

The following options from tls.connect() can also be specified. However, a globalAgent silently ignores these.

下列来自 tls.connect() 的选项也可以被指定,但 globalAgent 会静默忽略它们。

  • pfx: Certificate, Private key and CA certificates to use for SSL. Default null.
  • key: Private key to use for SSL. Default null.
  • passphrase: A string of passphrase for the private key or pfx. Default null.
  • cert: Public x509 certificate to use. Default null.
  • ca: An authority certificate or array of authority certificates to check the remote host against.
  • ciphers: A string describing the ciphers to use or exclude. Consult http://www.openssl.org/docs/apps/ciphers.html#CIPHER_LIST_FORMAT for details on the format.
  • rejectUnauthorized: If true, the server certificate is verified against the list of supplied CAs. An 'error' event is emitted if verification fails. Verification happens at the connection level, before the HTTP request is sent. Default true.
  • secureProtocol: The SSL method to use, e.g. SSLv3_method to force SSL version 3. The possible values depend on your installation of OpenSSL and are defined in the constant SSL_METHODS.

  • pfx:证书,SSL 所用的私钥或 CA 证书。缺省为 null

  • key:SSL 所用私钥。缺省为 null
  • passphrase:私钥或 pfx 的口令字符串,缺省为 null
  • cert:所用公有 x509 证书,缺省为 null
  • ca:用于检查远程主机的证书颁发机构或包含一系列证书颁发机构的数组。
  • ciphers:描述要使用或排除的密码的字符串,格式请参阅 http://www.openssl.org/docs/apps/ciphers.html#CIPHER_LIST_FORMAT
  • rejectUnauthorized:如为 true 则会使用所给的 CA 列表验证服务器证书。如果验证失败则会触发 'error' 事件。验证过程发生于连接层,在 HTTP 请求发送之前。缺省为 true
  • secureProtocol:所用 SSL 方法,比如 SSLv3_method 强制使用 SSL version 3。可取值取决于您安装的 OpenSSL 并被定义在 SSL_METHODS 常量。

In order to specify these options, use a custom Agent.

要指定这些选项,使用一个自定义 Agent

Example:

示例:

options.agent = new https.Agent(options);

var req = https.request(options, function(res) {
  ...
});

Or, without using an Agent.

或不使用 Agent

Example:

示例:

options.agent = false;

var req = https.request(options, function(res) {
  ...
});

https.get(options, callback)#

Like http.get() but for HTTPS.

类似 http.get() 但为 HTTPS。

options can be an object or a string. If options is a string, it is automatically parsed with url.parse().

options 可以是一个对象或字符串。如果 options 是字符串,它会自动被 url.parse() 解析。

Example:

示例:

https.get('https://encrypted.google.com/', function(res) {
  console.log("statusCode: " + res.statusCode);
}).on('error', function(e) {
  console.error(e);
});

类: https.Agent#

An Agent object for HTTPS similar to http.Agent. See https.request() for more information.

类似于 http.Agent 的 HTTPS Agent 对象。详见 https.request()

https.globalAgent#

Global instance of https.Agent for all HTTPS client requests.

所有 HTTPS 客户端请求的全局 https.Agent 实例。

URL#

稳定度: 3 - 稳定

This module has utilities for URL resolution and parsing. Call require('url') to use it.

该模块包含用以 URL 解析的实用函数。 使用 require('url') 来调用该模块。

Parsed URL objects have some or all of the following fields, depending on whether or not they exist in the URL string. Any parts that are not in the URL string will not be in the parsed object. Examples are shown for the URL

解析后的 URL 对象会包含以下部分或全部字段,取决于它们是否存在于 URL 字符串中。URL 中不存在的部分不会出现在解析后的对象中。以下示例基于这个 URL:

'http://user:pass@host.com:8080/p/a/t/h?query=string#hash'

  • href: The full URL that was originally parsed. Both the protocol and host are lowercased.

  • href: 所解析的完整原始 URL。协议名和主机名都已转为小写。

    例如: 'http://user:pass@host.com:8080/p/a/t/h?query=string#hash'

  • protocol: The request protocol, lowercased.

  • protocol: 请求协议,小写

    例如: 'http:'

  • host: The full lowercased host portion of the URL, including port information.

  • host: URL主机名已全部转换成小写, 包括端口信息

    例如: 'host.com:8080'

  • auth: The authentication information portion of a URL.

  • auth:URL中身份验证信息部分

    例如: 'user:pass'

  • hostname: Just the lowercased hostname portion of the host.

  • hostname:主机的主机名部分, 已转换成小写

    例如: 'host.com'

  • port: The port number portion of the host.

  • port: 主机的端口号部分

    例如: '8080'

  • pathname: The path section of the URL, that comes after the host and before the query, including the initial slash if present.

  • pathname: URL 的路径部分,位于主机名之后、查询之前,如有起始的斜杠也包含在内。

    例如: '/p/a/t/h'

  • search: The 'query string' portion of the URL, including the leading question mark.

  • search: URL 的“查询字符串”部分,包括开头的问号。

    例如: '?query=string'

  • path: Concatenation of pathname and search.

  • path: pathnamesearch 连在一起。

    例如: '/p/a/t/h?query=string'

  • query: Either the 'params' portion of the query string, or a querystring-parsed object.

  • query: 查询字符串中的参数部分(问号后面部分字符串),或者使用 querystring.parse() 解析后返回的对象。

    例如: 'query=string' or {'query':'string'}

  • hash: The 'fragment' portion of the URL including the pound-sign.

  • hash: URL 的 “#” 后面部分(包括 # 符号)

    例如: '#hash'

The following methods are provided by the URL module:

以下是 URL 模块提供的方法:

url.parse(urlStr, [parseQueryString], [slashesDenoteHost])#

Take a URL string, and return an object.

输入 URL 字符串,返回一个对象。

Pass true as the second argument to also parse the query string using the querystring module. Defaults to false.

将第二个参数设置为 true 则使用 querystring 模块来解析 URL 中的查询字符串部分,默认为 false

Pass true as the third argument to treat //foo/bar as { host: 'foo', pathname: '/bar' } rather than { pathname: '//foo/bar' }. Defaults to false.

将第三个参数设置为 true 来把诸如 //foo/bar 这样的URL解析为 { host: 'foo', pathname: '/bar' } 而不是 { pathname: '//foo/bar' }。 默认为 false

url.format(urlObj)#

Take a parsed URL object, and return a formatted URL string.

输入一个 URL 对象,返回格式化后的 URL 字符串。

  • href will be ignored.
  • protocol is treated the same with or without the trailing : (colon).
    • The protocols http, https, ftp, gopher, file will be postfixed with :// (colon-slash-slash).
    • All other protocols mailto, xmpp, aim, sftp, foo, etc will be postfixed with : (colon)
  • auth will be used if present.
  • hostname will only be used if host is absent.
  • port will only be used if host is absent.
  • host will be used in place of hostname and port
  • pathname is treated the same with or without the leading / (slash)
  • search will be used in place of query
  • query (object; see querystring) will only be used if search is absent.
  • search is treated the same with or without the leading ? (question mark)
  • hash is treated the same with or without the leading # (pound sign, anchor)

  • href 属性会被忽略。

  • protocol 无论是否带有末尾的 : (冒号),处理方式相同。
    • http, https, ftp, gopher, file 协议会以 :// (冒号-斜杠-斜杠)作为后缀。
    • 所有其他协议如 mailto, xmpp, aim, sftp, foo 等,会以 : (冒号)作为后缀。
  • auth 如果存在则会被使用。
  • hostname 仅在 host 属性未定义时使用。
  • port 仅在 host 属性未定义时使用。
  • host 会取代 hostname 和 port 被使用。
  • pathname 无论是否带有开头的 / (斜杠),处理方式相同。
  • search 会取代 query 属性被使用。
  • query (object 类型;详见 querystring) 仅在 search 未定义时使用。
  • search 无论是否带有开头的 ? (问号),处理方式相同。
  • hash 无论是否带有开头的 # (井号、锚点),处理方式相同。

url.resolve(from, to)#

Take a base URL, and a href URL, and resolve them as a browser would for an anchor tag. Examples:

以一个基础 URL 和一个 href URL 为参数,像浏览器解析锚点标签 (anchor tag) 的 href 那样解析出完整 URL。示例:

url.resolve('/one/two/three', 'four')         // '/one/two/four'
url.resolve('http://example.com/', '/one')    // 'http://example.com/one'
url.resolve('http://example.com/one', '/two') // 'http://example.com/two'

Query String#

稳定度: 3 - 稳定

This module provides utilities for dealing with query strings. It provides the following methods:

这个模块提供一些处理 query string 的工具。它提供下列方法:

querystring.stringify(obj, [sep], [eq])#

Serialize an object to a query string. Optionally override the default separator ('&') and assignment ('=') characters.

序列化一个对象到一个 query string。可以选择是否覆盖默认的分割符('&')和分配符('=')。

Example:

示例:

querystring.stringify({foo: 'bar', baz: 'qux'}, ';', ':')
// 返回如下字串
'foo:bar;baz:qux'

querystring.parse(str, [sep], [eq], [options])#

Deserialize a query string to an object. Optionally override the default separator ('&') and assignment ('=') characters.

将一个 query string 反序列化为一个对象。可以选择是否覆盖默认的分割符('&')和分配符('=')。

Options object may contain maxKeys property (equal to 1000 by default), it'll be used to limit processed keys. Set it to 0 to remove key count limitation.

options 对象可能包含 maxKeys 属性(默认为 1000),用来限制被处理的键 (key) 的数量。设为 0 可以去除键数量的限制。

Example:

示例:

querystring.parse('foo=bar&baz=qux&baz=quux&corge')
// returns
{ foo: 'bar', baz: ['qux', 'quux'], corge: '' }

querystring.escape#

The escape function used by querystring.stringify, provided so that it could be overridden if necessary.

querystring.stringify 使用的转义函数,在必要的时候可被重写。

querystring.unescape#

The unescape function used by querystring.parse, provided so that it could be overridden if necessary.

querystring.parse 使用的反转义函数,在必要的时候可被重写。

punycode#

稳定度: 2 - 不稳定

Punycode.js is bundled with Node.js v0.6.2+. Use require('punycode') to access it. (To use it with other Node.js versions, use npm to install the punycode module first.)

Punycode.js 自 Node.js v0.6.2+ 开始被内置,通过 require('punycode') 引入。(要在其它 Node.js 版本中使用它,请先使用 npm 安装 punycode 模块。)

punycode.decode(string)#

Converts a Punycode string of ASCII-only symbols to a string of Unicode symbols.

将一个纯 ASCII 符号的 Punycode 字符串转换为 Unicode 符号的字符串。

// 解码域名部分
punycode.decode('maana-pta'); // 'mañana'
punycode.decode('--dqo34k'); // '☃-⌘'

punycode.encode(string)#

Converts a string of Unicode symbols to a Punycode string of ASCII-only symbols.

将一个 Unicode 符号的字符串转换为纯 ASCII 符号的 Punycode 字符串。

// 编码域名部分
punycode.encode('mañana'); // 'maana-pta'
punycode.encode('☃-⌘'); // '--dqo34k'

punycode.toUnicode(domain)#

Converts a Punycode string representing a domain name to Unicode. Only the Punycoded parts of the domain name will be converted, i.e. it doesn't matter if you call it on a string that has already been converted to Unicode.

将一个表示域名的 Punycode 字符串转换为 Unicode。只有域名中的 Punycode 部分会转换,也就是说您在一个已经转换为 Unicode 的字符串上调用它也是没问题的。

// 解码域名
punycode.toUnicode('xn--maana-pta.com'); // 'mañana.com'
punycode.toUnicode('xn----dqo34k.com'); // '☃-⌘.com'

punycode.toASCII(domain)#

Converts a Unicode string representing a domain name to Punycode. Only the non-ASCII parts of the domain name will be converted, i.e. it doesn't matter if you call it with a domain that's already in ASCII.

将一个表示域名的 Unicode 字符串转换为 Punycode。只有域名的非 ASCII 部分会被转换,也就是说您在一个已经是 ASCII 的域名上调用它也是没问题的。

// 编码域名
punycode.toASCII('mañana.com'); // 'xn--maana-pta.com'
punycode.toASCII('☃-⌘.com'); // 'xn----dqo34k.com'

punycode.ucs2#

punycode.ucs2.decode(string)#

Creates an array containing the numeric code point values of each Unicode symbol in the string. While JavaScript uses UCS-2 internally, this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16.

创建一个数组,包含字符串中每个 Unicode 符号的数字编码点。虽然 JavaScript 内部使用 UCS-2,该函数仍会将一对代理半数 (surrogate halves,UCS-2 将其暴露为两个单独的字符) 转换为单个编码点,与 UTF-16 一致。

punycode.ucs2.decode('abc'); // [0x61, 0x62, 0x63]
// surrogate pair for U+1D306 tetragram for centre:
punycode.ucs2.decode('\uD834\uDF06'); // [0x1D306]

punycode.ucs2.encode(codePoints)#

Creates a string based on an array of numeric code point values.

以数字编码点的值的数组创建一个字符串。

punycode.ucs2.encode([0x61, 0x62, 0x63]); // 'abc'
punycode.ucs2.encode([0x1D306]); // '\uD834\uDF06'

punycode.version#

A string representing the current Punycode.js version number.

表示当前 Punycode.js 版本号的字符串。

Readline#

稳定度: 2 - 不稳定

To use this module, do require('readline'). Readline allows reading of a stream (such as process.stdin) on a line-by-line basis.

要使用此模块,需要 require('readline')。Readline 允许逐行读取一个流(例如 process.stdin)的内容。

Note that once you've invoked this module, your node program will not terminate until you've closed the interface. Here's how to allow your program to gracefully exit:

需要注意的是,一旦调用了这个模块,你的 node 程序将不会终止,直到你关闭此接口。下面是如何让程序优雅退出的方法:

var readline = require('readline');

var rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

rl.question("What do you think of node.js? ", function(answer) {
  // TODO: 将答案记录到数据库中
  console.log("Thank you for your valuable feedback:", answer);

  rl.close();
});

readline.createInterface(options)#

Creates a readline Interface instance. Accepts an "options" Object that takes the following values:

创建一个readline的接口实例. 接受一个Object类型参数,可传递以下几个值:

  • input - the readable stream to listen to (Required).

  • input - 要监听的可读流 (必需).

  • output - the writable stream to write readline data to (Required).

  • output - 要写入 readline 的可写流 (必须).

  • completer - an optional function that is used for Tab autocompletion. See below for an example of using this.

  • completer - 用于 Tab 自动补全的可选函数。见下面使用的例子。

  • terminal - pass true if the input and output streams should be treated like a TTY, and have ANSI/VT100 escape codes written to it. Defaults to checking isTTY on the output stream upon instantiation.

  • terminal - 如果希望像 TTY 一样对待 input 和 output 流,并向其写入 ANSI/VT100 转义码,则传 true。默认在实例化时检查 output 流的 isTTY 属性。

The completer function is given the current line entered by the user, and is supposed to return an Array with 2 entries:

completer 函数会得到用户当前输入的行作为参数,并且应当返回一个包含两个条目的数组:

  1. An Array with matching entries for the completion.

     一个由匹配补全的条目组成的数组。

  2. The substring that was used for the matching.

     用于匹配的子字符串。

Which ends up looking something like: [[substr1, substr2, ...], originalsubstring].

最终像这种形式: [[substr1, substr2, ...], originalsubstring].

Example:

示例:

function completer(line) {
  var completions = '.help .error .exit .quit .q'.split(' ')
  var hits = completions.filter(function(c) { return c.indexOf(line) == 0 })
  // show all completions if none found
  return [hits.length ? hits : completions, line]
}

Also completer can be run in async mode if it accepts two arguments:

completer 也可以运行在异步模式下,此时接受两个参数:

function completer(linePartial, callback) {
  callback(null, [['123'], linePartial]);
}

createInterface is commonly used with process.stdin and process.stdout in order to accept user input:

为了接受用户的输入,createInterface 通常跟 process.stdinprocess.stdout 一块使用:

var readline = require('readline');
var rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

Once you have a readline instance, you most commonly listen for the "line" event.

一旦你有一个 readline 实例,你通常会监听 "line" 事件。

If terminal is true for this instance then the output stream will get the best compatibility if it defines an output.columns property, and fires a "resize" event on the output if/when the columns ever change (process.stdout does this automatically when it is a TTY).

如果这个实例中terminaltrue,而且output流定义了一个output.columns属性,那么output流将获得最好的兼容性,并且,当columns变化时(当它是TTY时,process.stdout会自动这样做),会在output上触发一个 "resize"事件。

类: 接口#

The class that represents a readline interface with an input and output stream.

代表一个有输入输出流的 readline 接口的类。

rl.setPrompt(prompt)#

Sets the prompt, for example when you run node on the command line, you see > , which is node's prompt.

设置提示符,例如当你在命令行运行 node 时,你会看到 > ,这就是 node 的提示符。

rl.prompt([preserveCursor])#

Readies readline for input from the user, putting the current setPrompt options on a new line, giving the user a new spot to write. Set preserveCursor to true to prevent the cursor placement being reset to 0.

为用户输入准备好readline,将现有的setPrompt选项放到新的一行,让用户有一个新的地方开始输入。将preserveCursor设为true来防止光标位置被重新设定成0

This will also resume the input stream used with createInterface if it has been paused.

如果 input 流此前已被暂停,调用此方法也会恢复经由 createInterface 使用的 input 流。

rl.question(query, callback)#

Prepends the prompt with query and invokes callback with the user's response. Displays the query to the user, and then invokes callback with the user's response after it has been typed.

将 query 显示给用户作为提示,并在用户输入应答后,以用户的应答为参数调用 callback。

This will also resume the input stream used with createInterface if it has been paused.

如果 input 流此前已被暂停,调用此方法也会恢复经由 createInterface 使用的 input 流。

Example usage:

使用示例:

interface.question('What is your favorite food?', function(answer) {
  console.log('Oh, so your favorite food is ' + answer);
});

rl.pause()#

Pauses the readline input stream, allowing it to be resumed later if needed.

暂停 readline 的输入流 (input stream), 如果有需要稍后还可以恢复。

rl.resume()#

Resumes the readline input stream.

恢复 readline 的输入流 (input stream).

rl.close()#

Closes the Interface instance, relinquishing control on the input and output streams. The "close" event will also be emitted.

关闭接口实例 (Interface instance), 放弃控制输入输出流。"close" 事件会被触发。

rl.write(data, [key])#

Writes data to output stream. key is an object literal to represent a key sequence; available if the terminal is a TTY.

data 写入到 output 流。key 是一个代表键序列的对象;当终端是一个 TTY 时可用。

This will also resume the input stream if it has been paused.

如果 input 流此前已被暂停,调用此方法也会将其恢复。

Example:

示例:

rl.write('Delete me!');
// 模仿 ctrl+u快捷键,删除之前所写行 
rl.write(null, {ctrl: true, name: 'u'});

Events#

Event: 'line'#

function (line) {}

function (line) {}

Emitted whenever the input stream receives a \n, usually received when the user hits enter, or return. This is a good hook to listen for user input.

input 流接受了一个 \n 时触发,通常在用户敲击回车或者返回时接收。 这是一个监听用户输入的利器。

Example of listening for line:

监听 line 事件的示例:

rl.on('line', function (cmd) {
  console.log('You just typed: '+cmd);
});

事件: 'pause'#

function () {}

function () {}

Emitted whenever the input stream is paused.

不论何时,只要输入流被暂停就会触发。

Also emitted whenever the input stream is not paused and receives the SIGCONT event. (See events SIGTSTP and SIGCONT)

而在输入流未被暂停,但收到 SIGCONT 信号时也会触发。 (详见 SIGTSTPSIGCONT 事件)

Example of listening for pause:

监听 pause 事件的示例:

rl.on('pause', function() {
  console.log('Readline 输入暂停.');
});

事件: 'resume'#

function () {}

function () {}

Emitted whenever the input stream is resumed.

不论何时,只要输入流重新启用就会触发。

Example of listening for resume:

监听 resume 事件的示例:

rl.on('resume', function() {
  console.log('Readline 恢复.');
});

事件: 'close'#

function () {}

function () {}

Emitted when close() is called.

close() 被调用时触发。

Also emitted when the input stream receives its "end" event. The Interface instance should be considered "finished" once this is emitted. For example, when the input stream receives ^D, respectively known as EOT.

当 input 流接收到其 "end" 事件时也会触发。一旦该事件触发,就应当认为 Interface 实例已经"结束"。例如,当 input 流接收到 ^D(即 EOT)时。

This event is also called if there is no SIGINT event listener present when the input stream receives a ^C, respectively known as SIGINT.

input 流接收到一个 ^C 时,即使没有 SIGINT 监听器,也会触发这个事件,分别被称为 SIGINT

Event: 'SIGINT'#

function () {}

function () {}

Emitted whenever the input stream receives a ^C, respectively known as SIGINT. If there is no SIGINT event listener present when the input stream receives a SIGINT, pause will be triggered.

只要 input 流接收到 ^C(即 SIGINT)就会触发。当 input 流接收到 SIGINT 时,如果没有注册 SIGINT 事件监听器,则会触发 pause 事件。

Example of listening for SIGINT:

监听 SIGINT 信号的示例:

rl.on('SIGINT', function() {
  rl.question('Are you sure you want to exit?', function(answer) {
    if (answer.match(/^y(es)?$/i)) rl.pause();
  });
});

Event: 'SIGTSTP'#

function () {}

function () {}

This does not work on Windows.

该功能不支持 windows 操作系统

Emitted whenever the input stream receives a ^Z, respectively known as SIGTSTP. If there is no SIGTSTP event listener present when the input stream receives a SIGTSTP, the program will be sent to the background.

只要 input 流接收到 ^Z(即 SIGTSTP)就会触发。当 input 流接收到 SIGTSTP 时,如果没有注册 SIGTSTP 事件监听器,程序将被切换到后台。

When the program is resumed with fg, the pause and SIGCONT events will be emitted. You can use either to resume the stream.

当程序通过 fg 恢复到前台时,pause 和 SIGCONT 事件将会被触发。你可以使用两者中任一事件来恢复流。

The pause and SIGCONT events will not be triggered if the stream was paused before the program was sent to the background.

如果在程序被切换到后台之前流已经暂停,则程序恢复时不会触发 pause 和 SIGCONT 事件。

Example of listening for SIGTSTP:

监听 SIGTSTP 的示例:

rl.on('SIGTSTP', function() {
  // 这将重载 SIGTSTP并防止程序转到
  // 后台.
  console.log('Caught SIGTSTP.');
});

Event: 'SIGCONT'#

function () {}

function () {}

This does not work on Windows.

该功能不支持 windows 操作系统

Emitted whenever the input stream is sent to the background with ^Z, respectively known as SIGTSTP, and then continued with fg(1). This event only emits if the stream was not paused before sending the program to the background.

每当 input 流通过 ^Z(即 SIGTSTP)被切换到后台,然后又通过 fg(1) 恢复时触发。该事件仅在流被切换到后台之前没有暂停的情况下才会触发。

Example of listening for SIGCONT:

监听 SIGCONT 的示例:

rl.on('SIGCONT', function() {
  // `prompt` 将会自动恢复流
  rl.prompt();
});

示例: Tiny CLI#

Here's an example of how to use all these together to craft a tiny command line interface:

下面的例子展示了如何结合使用上述所有方法,打造一个小型命令行界面:

var readline = require('readline');
var rl = readline.createInterface(process.stdin, process.stdout);

rl.setPrompt('OHAI> ');
rl.prompt();

rl.on('line', function(line) {
  switch(line.trim()) {
    case 'hello':
      console.log('world!');
      break;
    default:
      console.log('Say what? I might have heard `' + line.trim() + '`');
      break;
  }
  rl.prompt();
}).on('close', function() {
  console.log('Have a great day!');
  process.exit(0);
});

REPL#

稳定度: 3 - 稳定

A Read-Eval-Print-Loop (REPL) is available both as a standalone program and easily includable in other programs. The REPL provides a way to interactively run JavaScript and see the results. It can be used for debugging, testing, or just trying things out.

一个 Read-Eval-Print-Loop(REPL,读取-执行-输出循环)既可用于独立程序也可很容易地被集成到其它程序中。REPL 提供了一种交互地执行 JavaScript 并查看输出的方式。它可以被用作调试、测试或仅仅尝试某些东西。

By executing node without any arguments from the command-line you will be dropped into the REPL. It has simplistic emacs line-editing.

在命令行中不带任何参数执行 node 您便会进入 REPL。它提供了一个简单的 Emacs 行编辑。

mjr:~$ node
Type '.help' for options.
> a = [ 1, 2, 3];
[ 1, 2, 3 ]
> a.forEach(function (v) {
...   console.log(v);
...   });
1
2
3

For advanced line-editors, start node with the environmental variable NODE_NO_READLINE=1. This will start the main and debugger REPL in canonical terminal settings which will allow you to use with rlwrap.

若想使用高级的行编辑器,请在启动 node 时设置环境变量 NODE_NO_READLINE=1。这会以标准 (canonical) 终端设置来启动主 REPL 和调试器 REPL,从而允许你配合 rlwrap 使用。

For example, you could add this to your bashrc file:

例如,您可以将下列代码加入到您的 bashrc 文件:

alias node="env NODE_NO_READLINE=1 rlwrap node"

repl.start(options)#

Returns and starts a REPLServer instance. Accepts an "options" Object that takes the following values:

启动并返回一个 REPLServer 实例。接受一个包含如下内容的 "options" 对象:

  • prompt - the prompt and stream for all I/O. Defaults to > .

  • prompt - 所有输入输出的提示符。默认是 > .

  • input - the readable stream to listen to. Defaults to process.stdin.

  • input - 监听的可读流。默认指向标准输入流 process.stdin

  • output - the writable stream to write readline data to. Defaults to process.stdout.

  • output - 用来输出数据的可写流。默认指向标准输出流 process.stdout

  • terminal - pass true if the stream should be treated like a TTY, and have ANSI/VT100 escape codes written to it. Defaults to checking isTTY on the output stream upon instantiation.

  • terminal - 如果 stream 应该被当做 TTY 来对待,并向其写入 ANSI/VT100 转义码,则传 true。默认在实例化时检查 output 流的 isTTY 属性。

  • eval - function that will be used to eval each given line. Defaults to an async wrapper for eval(). See below for an example of a custom eval.

  • eval - 用来对每一行进行求值的函数。 默认为eval()的一个异步包装函数。下面给出一个自定义eval的例子。

  • useColors - a boolean which specifies whether or not the writer function should output colors. If a different writer function is set then this does nothing. Defaults to the repl's terminal value.

  • useColors - 一个布尔值,表明了writer函数是否会输出颜色。如果设定了一个不同的writer函数,那么这不会产生任何影响。默认为repl的terminal值。

  • useGlobal - if set to true, then the repl will use the global object, instead of running scripts in a separate context. Defaults to false.

  • useGlobal - 如果设定为true,那么repl就会使用global对象而不是在一个独立环境里运行脚本。默认为false

  • ignoreUndefined - if set to true, then the repl will not output the return value of command if it's undefined. Defaults to false.

  • ignoreUndefined - 如果设定为true,那么repl将不会输出未定义命令的返回值。默认为false

  • writer - the function to invoke for each command that gets evaluated which returns the formatting (including coloring) to display. Defaults to util.inspect.

  • writer - 每一个命令被求值时都会调用此函数,该函数返回用于显示的格式化结果(包括颜色)。默认为 util.inspect。

You can use your own eval function if it has following signature:

你可以使用你自己的 eval 函数,只要它具有如下签名:

function eval(cmd, context, filename, callback) {
  callback(null, result);
}

Multiple REPLs may be started against the same running instance of node. Each will share the same global object but will have unique I/O.

多个 REPL 可以针对同一个运行中的 node 实例启动。它们共享同一个 global 对象,但各自拥有独立的 I/O。

Here is an example that starts a REPL on stdin, a Unix socket, and a TCP socket:

以下是通过标准输入流(stdin)、Unix socket 以及 TCP socket 三种情况来启动 REPL 的例子:

var net = require('net'),
    repl = require('repl');

connections = 0;

repl.start({
  prompt: "node via stdin> ",
  input: process.stdin,
  output: process.stdout
});

net.createServer(function (socket) {
  connections += 1;
  repl.start({
    prompt: "node via Unix socket> ",
    input: socket,
    output: socket
  }).on('exit', function() {
    socket.end();
  })
}).listen("/tmp/node-repl-sock");

net.createServer(function (socket) {
  connections += 1;
  repl.start({
    prompt: "node via TCP socket> ",
    input: socket,
    output: socket
  }).on('exit', function() {
    socket.end();
  });
}).listen(5001);

Running this program from the command line will start a REPL on stdin. Other REPL clients may connect through the Unix socket or TCP socket. telnet is useful for connecting to TCP sockets, and socat can be used to connect to both Unix and TCP sockets.

从命令行运行该程序,将会从标准输入流启动 REPL 模式。 其他的 REPL 客户端也可以通过 Unix socket 或者 TCP socket 连接。 telnet 常用于连接 TCP sockets,而 socat 则可以同时用来连接 Unix 和 TCP sockets。

By starting a REPL from a Unix socket-based server instead of stdin, you can connect to a long-running node process without restarting it.

通过从一个基于 Unix socket 的服务器而不是 stdin 启动 REPL,你可以连接到一个长期运行的 node 进程而无需重启它。

For an example of running a "full-featured" (terminal) REPL over a net.Server and net.Socket instance, see: https://gist.github.com/2209310

一个在net.Servernet.Socket实例上运行的"全功能"(terminal)REPL的例子可以查看这里: https://gist.github.com/2209310

For an example of running a REPL instance over curl(1), see: https://gist.github.com/2053342

一个在curl(1)上运行的REPL实例的例子可以查看这里: https://gist.github.com/2053342

事件: 'exit'#

function () {}

function () {}

Emitted when the user exits the REPL in any of the defined ways. Namely, typing .exit at the repl, pressing Ctrl+C twice to signal SIGINT, or pressing Ctrl+D to signal "end" on the input stream.

当用户通过任意预定义的方式退出REPL,该事件被分发。比如,在repl里输入.exit,按Ctrl+C两次来发送SIGINT信号,或者在input流上按Ctrl+D来发送"end"。

Example of listening for exit:

监听 exit 事件的例子:

r.on('exit', function () {
  console.log('从 REPL 得到 "exit" 事件!');
  process.exit();
});

事件: 'reset'#

function (context) {}

function (context) {}

Emitted when the REPL's context is reset. This happens when you type .clear. If you start the repl with { useGlobal: true } then this event will never be emitted.

当REPL的上下文被重置时,该事件被分发。当你打.clear命令时这种情况就会发生。如果你以{ useGlobal: true }来启动repl,那么这个事件就永远不会被分发。

Example of listening for reset:

监听reset的例子:

// 当一个新的上下文被创建时,扩充这个上下文。
r.on('reset', function (context) {
  console.log('repl有一个新的上下文');
  someExtension.extend(context);
});

REPL 特性#

Inside the REPL, Control+D will exit. Multi-line expressions can be input. Tab completion is supported for both global and local variables.

在 REPL 里,按 Control+D 会退出。可以输入多行表达式。全局变量和局部变量都支持 Tab 自动补全。

The special variable _ (underscore) contains the result of the last expression.

特殊变量 _ (下划线)储存了上一个表达式的结果。

> [ "a", "b", "c" ]
[ 'a', 'b', 'c' ]
> _.length
3
> _ += 1
4

The REPL provides access to any variables in the global scope. You can expose a variable to the REPL explicitly by assigning it to the context object associated with each REPLServer. For example:

REPL提供了访问global域里所有变量的权限。通过将一个变量赋值给与每一个REPLServer关联的context对象,你可以显式地将一个变量暴露给REPL。例如:

repl.start("> ").context.m = msg;

Things in the context object appear as local within the REPL:

context对象里的东西,会在REPL以本地变量的形式出现。

mjr:~$ node repl_test.js
> m
'message'

There are a few special REPL commands:

有几个特殊的REPL命令:

  • .break - While inputting a multi-line expression, sometimes you get lost or just don't care about completing it. .break will start over.
  • .clear - Resets the context object to an empty object and clears any multi-line expression.
  • .exit - Close the I/O stream, which will cause the REPL to exit.
  • .help - Show this list of special commands.
  • .save - Save the current REPL session to a file

    .save ./file/to/save.js

  • .load - Load a file into the current REPL session.

    .load ./file/to/load.js

  • .break - 当你输入一个多行表达式时,有时你会迷失思路或者不想再完成它了。.break 让你可以从头再来。

  • .clear - 重置context对象为一个空对象,并且清除所有的多行表达式。
  • .exit - 关闭I/O流,使得REPL退出。
  • .help - 显示这个特殊命令的列表。
  • .save - 将当前的REPL会话保存到一个文件

    .save ./file/to/save.js

  • .load - 将一个文件装载到当前的REPL会话。

    .load ./file/to/load.js

The following key combinations in the REPL have these special effects:

下面的组合键在REPL中有以下效果:

  • <ctrl>C - Similar to the .break keyword. Terminates the current command. Press twice on a blank line to forcibly exit.
  • <ctrl>D - Similar to the .exit keyword.

  • <ctrl>C - 与.break关键字类似。终止正在执行的命令。在一个空行连按两次会强制退出。

  • <ctrl>D - 与.exit关键字类似。

执行 JavaScript#

稳定度: 3 - 稳定

You can access this module with:

你可以这样引入此模块:

var vm = require('vm');

JavaScript code can be compiled and run immediately or compiled, saved, and run later.

JavaScript 代码可以被编译并立即执行,也可以在编译后保存,留到稍后执行。

vm.runInThisContext(code, [options])#

vm.runInThisContext() compiles code, runs it and returns the result. Running code does not have access to local scope, but does have access to the current global object.

vm.runInThisContext()code 进行编译、运行并返回结果。 被运行的代码没有对本地作用域 (local scope) 的访问权限,但是可以访问当前的 global 对象。

Example of using vm.runInThisContext and eval to run the same code:

使用 vm.runInThisContexteval 分别执行相同的代码:

// vmResult: 'vm', localVar: 'initial value'
// evalResult: 'eval', localVar: 'eval'

vm.runInThisContext does not have access to the local scope, so localVar is unchanged. eval does have access to the local scope, so localVar is changed.

vm.runInThisContext 无法访问本地作用域,因此 localVar 没有被改变。 eval 可以访问本地作用域,因此 localVar 被改变。

In this way vm.runInThisContext is much like an indirect eval call, e.g. (0,eval)('code'). However, it also has the following additional options:

这种情况下 vm.runInThisContext 可以看作一种 间接的 eval 调用, 如 (0,eval)('code')。但是 vm.runInThisContext 也提供下面几个额外的参数:

  • filename: allows you to control the filename that shows up in any stack traces produced.
  • displayErrors: whether or not to print any errors to stderr, with the line of code that caused them highlighted, before throwing an exception. Will capture both syntax errors from compiling code and runtime errors thrown by executing the compiled code. Defaults to true.
  • timeout: a number of milliseconds to execute code before terminating execution. If execution is terminated, an Error will be thrown.

  • filename: 允许您更改显示在栈追踪 (stack trace) 中的文件名

  • displayErrors: 是否在抛出异常前输出带高亮错误代码行的错误信息到 stderr。 将会捕捉所有在编译 code 的过程中产生的语法错误以及执行过程中产生的运行时错误。 默认为 true
  • timeout: 以毫秒为单位规定 code 允许执行的时间。在执行过程中被终止时会有 Error 抛出。

vm.createContext([sandbox])#

If given a sandbox object, will "contextify" that sandbox so that it can be used in calls to vm.runInContext or script.runInContext. Inside scripts run as such, sandbox will be the global object, retaining all its existing properties but also having the built-in objects and functions any standard global object has. Outside of scripts run by the vm module, sandbox will be unchanged.

如提供 sandbox 对象则将沙箱 (sandbox) 对象 “上下文化 (contextify)” 供 vm.runInContext 或者 script.runInContext 使用。 以此方式运行的脚本将以 sandbox 作为全局对象,该对象将在保留其所有的属性的基础上拥有标准 全局对象 所拥有的内置对象和函数。 在由 vm 模块运行的脚本之外的地方 sandbox 将不会被改变。

If not given a sandbox object, returns a new, empty contextified sandbox object you can use.

如果没有提供沙箱对象,则返回一个新建的、空的、已上下文化的沙箱对象供使用。

This function is useful for creating a sandbox that can be used to run multiple scripts, e.g. if you were emulating a web browser it could be used to create a single sandbox representing a window's global object, then run all <script> tags together inside that sandbox.

此函数可用于创建可执行多个脚本的沙箱, 比如,在模拟浏览器的时候可以使用该函数创建一个用于表示 window 全局对象的沙箱, 并将所有 <script> 标签放入沙箱执行。

vm.isContext(sandbox)#

Returns whether or not a sandbox object has been contextified by calling vm.createContext on it.

返回沙箱对象是否已经通过 vm.createContext 上下文化 (contextified)

vm.runInContext(code, contextifiedSandbox, [options])#

vm.runInContext compiles code, then runs it in contextifiedSandbox and returns the result. Running code does not have access to local scope. The contextifiedSandbox object must have been previously contextified via vm.createContext; it will be used as the global object for code.

vm.runInContext 编译 code 放入 contextifiedSandbox 执行并返回执行结果。 被执行的代码对 本地作用域 (local scope) 没有访问权。 contextifiedSandbox 必须在使用前通过 vm.createContext 上下文化,用作 code 的全局对象。

vm.runInContext takes the same options as vm.runInThisContext.

vm.runInContext 使用与 vm.runInThisContext 相同的 选项 (options)

Example: compile and execute different scripts in a single existing context.

示例:在同一个上下文中编译并执行不同的脚本

// { globalVar: 1024 }

Note that running untrusted code is a tricky business requiring great care. vm.runInContext is quite useful, but safely running untrusted code requires a separate process.

执行不可信代码 (untrusted code) 是一件充满技巧而且需要非常小心的工作。 vm.runInContext 十分好用,但是安全地运行不可信代码还需要将这些代码放入单独的进程里面执行。

vm.runInNewContext(code, [sandbox], [options])#

vm.runInNewContext compiles code, contextifies sandbox if passed or creates a new contextified sandbox if it's omitted, and then runs the code with the sandbox as the global object and returns the result.

vm.runInNewContext 首先编译 code,若提供 sandbox 则将 sandbox 上下文化,若未提供则创建一个新的沙箱并上下文化, 然后将代码放入沙箱作为全局对象的上下文内执行并返回结果。

vm.runInNewContext takes the same options as vm.runInThisContext.

vm.runInNewContext 使用与 vm.runInThisContext 相同的 选项 (options)

Example: compile and execute code that increments a global variable and sets a new one. These globals are contained in the sandbox.

示例: 编译并执行一段“自增一个全局变量然后创建一个全局变量”的代码。这些被操作的全局变量会被保存在沙箱中。

// { animal: 'cat', count: 3, name: 'kitty' }

Note that running untrusted code is a tricky business requiring great care. vm.runInNewContext is quite useful, but safely running untrusted code requires a separate process.

执行不可信代码 (untrusted code) 是一件充满技巧而且需要非常小心的工作。 vm.runInNewContext 十分好用,但是安全地运行不可信代码还需要将这些代码放入单独的进程里面执行。

类: Script#

A class for holding precompiled scripts, and running them in specific sandboxes.

用于存放预编译脚本的类,可将预编译代码放入沙箱执行。

new vm.Script(code, options)#

Creating a new Script compiles code but does not run it. Instead, the created vm.Script object represents this compiled code. This script can be run later many times using methods below. The returned script is not bound to any global object. It is bound before each run, just for that run.

创建一个新的 Script 对象,编译 code 但不执行。创建出的 vm.Script 对象代表这份编译好的代码,可以在之后使用下面的方法多次执行。返回的脚本未绑定到任何全局对象(上下文),而是在每次执行前绑定,且仅对那一次执行有效。

The options when creating a script are:

创建脚本的选项 (option) 有:

  • filename: allows you to control the filename that shows up in any stack traces produced from this script.
  • displayErrors: whether or not to print any errors to stderr, with the line of code that caused them highlighted, before throwing an exception. Applies only to syntax errors compiling the code; errors while running the code are controlled by the options to the script's methods.

  • filename: 允许您更改显示在此脚本产生的栈追踪 (stack trace) 中的文件名

  • displayErrors: 是否在抛出异常前输出带高亮错误代码行的错误信息到 stderr。 仅捕捉所有在编译过程中产生的语法错误(运行时错误由运行脚本选项控制)。

script.runInThisContext([options])#

Similar to vm.runInThisContext but a method of a precompiled Script object. script.runInThisContext runs script's compiled code and returns the result. Running code does not have access to local scope, but does have access to the current global object.

类似 vm.runInThisContext 只是作为预编译的 Script 对象方法。 script.runInThisContext 执行被编译的 script 并返回结果。 被运行的代码没有对本地作用域 (local scope) 的访问权限,但是可以访问当前的 global 对象。

Example of using script.runInThisContext to compile code once and run it multiple times:

示例: 使用 script.runInThisContext 将代码编译一次并多次执行:

// 1000

The options for running a script are:

运行脚本的选项 (option) 有:

  • displayErrors: whether or not to print any runtime errors to stderr, with the line of code that caused them highlighted, before throwing an exception. Applies only to runtime errors executing the code; it is impossible to create a Script instance with syntax errors, as the constructor will throw.
  • timeout: a number of milliseconds to execute the script before terminating execution. If execution is terminated, an Error will be thrown.

  • displayErrors: 是否在抛出异常前输出带高亮错误代码行的错误信息到 stderr。 仅捕捉所有执行过程中产生的运行时错误(语法错误会在 Script 示例创建时就发生,因此不可能创建出带语法错误的 Script 对象)。

  • timeout: 以毫秒为单位规定 code 允许执行的时间。在执行过程中被终止时会有 Error 抛出。

script.runInContext(contextifiedSandbox, [options])#

Similar to vm.runInContext but a method of a precompiled Script object. script.runInContext runs script's compiled code in contextifiedSandbox and returns the result. Running code does not have access to local scope.

类似 vm.runInContext 只是作为预编译的 Script 对象方法。 script.runInContextcontextifiedSandbox 中执行 script 编译出的代码,并返回结果。 被运行的代码没有对本地作用域 (local scope) 的访问权限。

script.runInContext takes the same options as script.runInThisContext.

script.runInContext 使用与 script.runInThisContext 相同的 选项 (option)。

Example: compile code that increments a global variable and sets one, then execute the code multiple times. These globals are contained in the sandbox.

示例: 编译一段“自增一个全局变量然后创建一个全局变量”的代码,然后多次执行此代码, 被操作的全局变量会被保存在沙箱中。

// { animal: 'cat', count: 12, name: 'kitty' }

Note that running untrusted code is a tricky business requiring great care. script.runInContext is quite useful, but safely running untrusted code requires a separate process.

执行不可信代码 (untrusted code) 是一件充满技巧而且需要非常小心的工作。 script.runInContext 十分好用,但是安全地运行不可信代码还需要将这些代码放入单独的进程里面执行。

script.runInNewContext([sandbox], [options])#

Similar to vm.runInNewContext but a method of a precompiled Script object. script.runInNewContext contextifies sandbox if passed or creates a new contextified sandbox if it's omitted, and then runs script's compiled code with the sandbox as the global object and returns the result. Running code does not have access to local scope.

类似 vm.runInNewContext,但作为预编译的 Script 对象的方法。若提供 sandbox,则 script.runInNewContext 会将 sandbox 上下文化;若未提供,则创建一个新的上下文化的沙箱。然后以该沙箱作为全局对象执行 script 编译出的代码并返回结果。被运行的代码没有对本地作用域 (local scope) 的访问权限。

script.runInNewContext takes the same options as script.runInThisContext.

script.runInNewContext 使用与 script.runInThisContext 相同的 选项 (option)。

Example: compile code that sets a global variable, then execute the code multiple times in different contexts. These globals are set on and contained in the sandboxes.

示例: 编译一段“写入一个全局变量”的代码,然后将代码放入不同的上下文 (context) 执行,这些被操作的全局变量会被保存在沙箱中。

var util = require('util');
var vm = require('vm');

var script = new vm.Script('globalVar = "set"');

var sandboxes = [{}, {}, {}];
sandboxes.forEach(function (sandbox) {
  script.runInNewContext(sandbox);
});

console.log(util.inspect(sandboxes));
// [{ globalVar: 'set' }, { globalVar: 'set' }, { globalVar: 'set' }]

Note that running untrusted code is a tricky business requiring great care. script.runInNewContext is quite useful, but safely running untrusted code requires a separate process.

执行不可信代码 (untrusted code) 是一件充满技巧而且需要非常小心的工作。 script.runInNewContext 十分好用,但是安全地运行不可信代码还需要将这些代码放入单独的进程里面执行。

子进程#

稳定度: 3 - 稳定

Node provides a tri-directional popen(3) facility through the child_process module.

Node 通过 child_process 模块提供了类似 popen(3) 的处理三向数据流(stdin/stdout/stderr)的功能。

It is possible to stream data through a child's stdin, stdout, and stderr in a fully non-blocking way. (Note that some programs use line-buffered I/O internally. That doesn't affect node.js but it means data you send to the child process is not immediately consumed.)

它能够以完全非阻塞的方式通过子进程的 stdin、stdout 和 stderr 流式传递数据。(请注意,某些程序在内部使用行缓冲 I/O。这不会影响 node.js,但意味着您发送给子进程的数据不会被立即消费。)

To create a child process use require('child_process').spawn() or require('child_process').fork(). The semantics of each are slightly different, and explained below.

使用 require('child_process').spawn()require('child_process').fork() 创建子进程。这两种方法的语义有些区别,下文将会解释。

类: ChildProcess#

ChildProcess is an EventEmitter.

ChildProcess 是一个 EventEmitter

Child processes always have three streams associated with them. child.stdin, child.stdout, and child.stderr. These may be shared with the stdio streams of the parent process, or they may be separate stream objects which can be piped to and from.

子进程总是有三个与之关联的流:child.stdin、child.stdout 和 child.stderr。它们可以与父进程的 stdio 流共享,也可以是独立的、可以进行导流的流对象。

The ChildProcess class is not intended to be used directly. Use the spawn() or fork() methods to create a Child Process instance.

ChildProcess 类不应被直接使用,请使用 spawn() 或者 fork() 方法创建 Child Process 实例。

事件: 'error'#

  • err Error Object the error.

  • err Error Object 错误。

Emitted when:

发生于:

  1. The process could not be spawned, or
  2. The process could not be killed, or
  3. Sending a message to the child process failed for whatever reason.
  1. 进程无法被创建,或者
  2. 进程无法被终止,或者
  3. 因任何原因向子进程发送消息失败。

See also ChildProcess#kill() and ChildProcess#send().

参阅 ChildProcess#kill()ChildProcess#send()

事件: 'exit'#

  • code Number the exit code, if it exited normally.
  • signal String the signal passed to kill the child process, if it was killed by the parent.

  • code Number 假如进程正常退出,则为它的退出代码。

  • signal String 假如是被父进程终止,则为所传入的终止子进程的信号。

This event is emitted after the child process ends. If the process terminated normally, code is the final exit code of the process, otherwise null. If the process terminated due to receipt of a signal, signal is the string name of the signal, otherwise null.

这个事件会在子进程结束后触发。如果进程正常终止,code 为进程最终的退出代码,否则为 null。如果进程是因接收到信号而终止,signal 为该信号的字符串名称,否则为 null。

Note that the child process stdio streams might still be open.

注意子进程的 stdio 流可能仍为开启状态。

See waitpid(2).

参阅 waitpid(2)。

事件: 'close'#

  • code Number the exit code, if it exited normally.
  • signal String the signal passed to kill the child process, if it was killed by the parent.

  • code Number 假如进程正常退出,则为它的退出代码。

  • signal String 假如是被父进程终止,则为所传入的终止子进程的信号。

This event is emitted when the stdio streams of a child process have all terminated. This is distinct from 'exit', since multiple processes might share the same stdio streams.

这个事件会在子进程的所有 stdio 流都已终止时触发。这与 'exit' 事件不同,因为多个进程可能共享同一组 stdio 流。

事件: 'disconnect'#

This event is emitted after using the .disconnect() method in the parent or in the child. After disconnecting it is no longer possible to send messages. An alternative way to check if you can send messages is to see if the child.connected property is true.

在父进程或子进程中调用 .disconnect() 方法后会触发这个事件。断开连接之后,就不能再相互发送消息了。另一种检查能否发送消息的方式是查看 child.connected 属性是否为 true。

事件: 'message'#

  • message Object a parsed JSON object or primitive value
  • sendHandle Handle object a Socket or Server object

  • message Object 一个已解析的 JSON 对象或原始类型值

  • sendHandle Handle object 一个 Socket 或 Server 对象

Messages send by .send(message, [sendHandle]) are obtained using the message event.

通过 .send(message, [sendHandle]) 发送的消息可以通过监听 'message' 事件获取。

child.stdin#

  • Stream object

  • Stream object

A Writable Stream that represents the child process's stdin. Closing this stream via end() often causes the child process to terminate.

代表子进程 stdin 的可写流 (Writable Stream)。通过 end() 关闭该流通常会导致子进程终止。

If the child stdio streams are shared with the parent, then this will not be set.

如果子进程的 stdio 流与父进程共享,则此项不会被设置。

child.stdout#

  • Stream object

  • Stream object

A Readable Stream that represents the child process's stdout.

代表子进程 stdout 的可读流 (Readable Stream)。

If the child stdio streams are shared with the parent, then this will not be set.

如果子进程的 stdio 流与父进程共享,则此项不会被设置。

child.stderr#

  • Stream object

  • Stream object

A Readable Stream that represents the child process's stderr.

代表子进程 stderr 的可读流 (Readable Stream)。

If the child stdio streams are shared with the parent, then this will not be set.

如果子进程的 stdio 流与父进程共享,则此项不会被设置。

child.pid#

  • Integer

  • Integer

The PID of the child process.

子进程的 PID。

Example:

示例:

var spawn = require('child_process').spawn,
    grep  = spawn('grep', ['ssh']);

console.log('Spawned child pid: ' + grep.pid);
grep.stdin.end();

child.kill([signal])#

  • signal String

  • signal String

Send a signal to the child process. If no argument is given, the process will be sent 'SIGTERM'. See signal(7) for a list of available signals.

向子进程发送一个信号。如果没有给出参数,进程将会收到 'SIGTERM' 信号。参阅 signal(7) 查看可用信号的列表。

var spawn = require('child_process').spawn,
    grep  = spawn('grep', ['ssh']);

// 向进程发送 SIGHUP
grep.kill('SIGHUP');

May emit an 'error' event when the signal cannot be delivered. Sending a signal to a child process that has already exited is not an error but may have unforeseen consequences: if the PID (the process ID) has been reassigned to another process, the signal will be delivered to that process instead. What happens next is anyone's guess.

当信号无法送达时可能会触发 'error' 事件。向一个已经退出的子进程发送信号并不是错误,但可能产生无法预见的后果:如果该进程的 PID(进程 ID)已经被重新分配给了其他进程,信号将被送达那个进程,接下来会发生什么就难以预料了。

Note that while the function is called kill, the signal delivered to the child process may not actually kill it. kill really just sends a signal to a process.

注意:虽然这个函数名为 kill,但传递给子进程的信号并不一定会真的终止它。kill 实际上只是向进程发送一个信号而已。

See kill(2)

参阅 kill(2)。

child.send(message, [sendHandle])#

  • message Object
  • sendHandle Handle object

  • message Object

  • sendHandle Handle object

When using child_process.fork() you can write to the child using child.send(message, [sendHandle]) and messages are received by a 'message' event on the child.

当使用 child_process.fork() 时,您可以使用 child.send(message, [sendHandle]) 向子进程写入数据,数据将通过子进程上的 'message' 事件接收。

For example:

例如:

var cp = require('child_process');

var n = cp.fork(__dirname + '/sub.js');

n.on('message', function(m) {
  console.log('PARENT got message:', m);
});

n.send({ hello: 'world' });

And then the child script, 'sub.js' might look like this:

然后是子进程脚本的代码, 'sub.js' 代码如下:

process.on('message', function(m) {
  console.log('CHILD got message:', m);
});

process.send({ foo: 'bar' });

In the child the process object will have a send() method, and process will emit objects each time it receives a message on its channel.

在子进程中,process 对象拥有 send() 方法,并且 process 每次通过其信道接收到消息时都会触发 'message' 事件,消息以对象形式给出。

There is a special case when sending a {cmd: 'NODE_foo'} message. All messages containing a NODE_ prefix in its cmd property will not be emitted in the message event, since they are internal messages used by node core. Messages containing the prefix are emitted in the internalMessage event, you should by all means avoid using this feature, it is subject to change without notice.

发送 {cmd: 'NODE_foo'} 消息是一个特殊情况。所有 cmd 属性中带有 NODE_ 前缀的消息都不会触发 'message' 事件,因为它们是 node 核心所使用的内部消息。带有该前缀的消息会触发 internalMessage 事件。您应当尽量避免使用这个特性,它可能在不另行通知的情况下发生改变。

The sendHandle option to child.send() is for sending a TCP server or socket object to another process. The child will receive the object as its second argument to the message event.

child.send()sendHandle 选项是用来发送一个TCP服务或者socket对象到另一个线程的,子进程将会接收这个参数作为‘message’事件的第二个参数。

Emits an 'error' event if the message cannot be sent, for example because the child process has already exited.

如果消息无法被发送(比如因为子进程已经退出),则会触发 'error' 事件。

例子: 发送一个server对象#

Here is an example of sending a server:

这里是一个发送一个server对象的例子:

var child = require('child_process').fork('child.js');

// 打开 server 对象并发送句柄。
var server = require('net').createServer();
server.on('connection', function (socket) {
  socket.end('handled by parent');
});
server.listen(1337, function() {
  child.send('server', server);
});

And the child would then receive the server object as:

同时子进程将会以如下方式接收到这个server对象:

process.on('message', function(m, server) {
  if (m === 'server') {
    server.on('connection', function (socket) {
      socket.end('handled by child');
    });
  }
});

Note that the server is now shared between the parent and child, this means that some connections will be handled by the parent and some by the child.

注意,现在 server 对象由父进程和子进程共享,这意味着某些连接会由父进程处理,而某些会由子进程处理。

For dgram servers the workflow is exactly the same. Here you listen on a message event instead of connection and use server.bind instead of server.listen. (Currently only supported on UNIX platforms.)

对于 dgram 服务器,工作流程完全一样:监听 message 事件而不是 connection 事件,并使用 server.bind 而不是 server.listen。(目前仅在 UNIX 平台上支持。)

示例: 发送socket对象#

Here is an example of sending a socket. It will spawn two children and handle connections with the remote address 74.125.127.100 as VIP by sending the socket to a "special" child process. Other sockets will go to a "normal" process.

这是一个发送 socket 的例子。它会派生两个子进程,并通过将 socket 发送给一个"特殊"的子进程,把来自远程地址 74.125.127.100 的连接作为 VIP 处理;其他 socket 则交给"普通"的子进程。

var normal = require('child_process').fork('child.js', ['normal']);
var special = require('child_process').fork('child.js', ['special']);

// 打开 server 并把 socket 发送给子进程
var server = require('net').createServer();
server.on('connection', function (socket) {

  // 如果是 VIP
  if (socket.remoteAddress === '74.125.127.100') {
    special.send('socket', socket);
    return;
  }
  // 普通用户
  normal.send('socket', socket);
});
server.listen(1337);

The child.js could look like this:

child.js 文件代码如下:

process.on('message', function(m, socket) {
  if (m === 'socket') {
    socket.end('You were handled as a ' + process.argv[2] + ' person');
  }
});

Note that once a single socket has been sent to a child the parent can no longer keep track of when the socket is destroyed. To indicate this condition the .connections property becomes null. It is also recommended not to use .maxConnections in this condition.

注意,一旦一个 socket 被发送给了子进程,当该 socket 被销毁时,父进程就无法再对其进行跟踪。为表明这种情况,.connections 属性会变为 null。在这种情况下,也不建议使用 .maxConnections。

child.disconnect()#

To close the IPC connection between parent and child use the child.disconnect() method. This allows the child to exit gracefully since there is no IPC channel keeping it alive. When calling this method the disconnect event will be emitted in both parent and child, and the connected flag will be set to false. Please note that you can also call process.disconnect() in the child process.

使用 child.disconnect() 方法可以关闭父进程与子进程之间的 IPC 连接。它让子进程得以优雅地退出,因为不再有 IPC 信道维持其存活。调用该方法时,父进程和子进程中都会触发 disconnect 事件,并且 connected 标志会被设置为 false。请注意,您也可以在子进程中调用 process.disconnect()。

child_process.spawn(command, [args], [options])#

  • command String The command to run
  • args Array List of string arguments
  • options Object
    • cwd String Current working directory of the child process
    • stdio Array|String Child's stdio configuration. (See below)
    • customFds Array Deprecated File descriptors for the child to use for stdio. (See below)
    • env Object Environment key-value pairs
    • detached Boolean The child will be a process group leader. (See below)
    • uid Number Sets the user identity of the process. (See setuid(2).)
    • gid Number Sets the group identity of the process. (See setgid(2).)
  • return: ChildProcess object
  • command {String}要运行的命令
  • args {Array} 字符串参数列表
  • options {Object}
    • cwd {String} 子进程的当前的工作目录
    • stdio {Array|String} 子进程 stdio 配置. (参阅下文)
    • customFds {Array} Deprecated 作为子进程 stdio 使用的 文件标示符. (参阅下文)
    • env {Object} 环境变量的键值对
    • detached {Boolean} 子进程将会变成一个进程组的领导者. (参阅下文)
    • uid {Number} 设置进程的用户标识。(参阅 setuid(2)。)
    • gid {Number} 设置进程的组标识。(参阅 setgid(2)。)
  • 返回: {ChildProcess object}

Launches a new process with the given command, with command line arguments in args. If omitted, args defaults to an empty Array.

使用给定的命令和 args 中的命令行参数启动一个新进程。如果省略 args,则它默认为一个空数组。

The third argument is used to specify additional options, which defaults to:

第三个参数被用来指定额外的设置,默认是:

{ cwd: undefined,
  env: process.env
}

cwd allows you to specify the working directory from which the process is spawned. Use env to specify environment variables that will be visible to the new process.

cwd 允许您指定所派生进程的工作目录。使用 env 指定对新进程可见的环境变量。

Example of running ls -lh /usr, capturing stdout, stderr, and the exit code:

一个运行 ls -lh /usr的例子, 获取stdout, stderr, 和退出代码:

var spawn = require('child_process').spawn,
    ls    = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', function (data) {
  console.log('stdout: ' + data);
});

ls.stderr.on('data', function (data) {
  console.log('stderr: ' + data);
});

ls.on('close', function (code) {
  console.log('child process exited with code ' + code);
});

Example: A very elaborate way to run 'ps ax | grep ssh'

例子: 一个非常精巧的方法执行 'ps ax | grep ssh'

var spawn = require('child_process').spawn,
    ps    = spawn('ps', ['ax']),
    grep  = spawn('grep', ['ssh']);

ps.stdout.pipe(grep.stdin);

grep.on('close', function (code) {
  if (code !== 0) {
    console.log('grep process exited with code ' + code);
  }
});

Example of checking for failed exec:

检查执行错误的例子:

var spawn = require('child_process').spawn,
    child = spawn('bad_command');

child.stderr.setEncoding('utf8');
child.stderr.on('data', function (data) {
  if (/^execvp\(\)/.test(data)) {
    console.log('Failed to start child process.');
  }
});

Note that if spawn receives an empty options object, it will result in spawning the process with an empty environment rather than using process.env. This due to backwards compatibility issues with a deprecated API.

注意,如果 spawn 接收到一个空的 options 对象,则派生的进程会使用空的环境变量,而不是使用 process.env。这是出于与一个已废弃 API 的向后兼容性考虑。

The 'stdio' option to child_process.spawn() is an array where each index corresponds to a fd in the child. The value is one of the following:

child_process.spawn() 中的 stdio 选项是一个数组,每个索引对应子进程中的一个文件标识符。可以是下列值之一:

  1. 'pipe' - Create a pipe between the child process and the parent process. The parent end of the pipe is exposed to the parent as a property on the child_process object as ChildProcess.stdio[fd]. Pipes created for fds 0 - 2 are also available as ChildProcess.stdin, ChildProcess.stdout and ChildProcess.stderr, respectively.
  2. 'ipc' - Create an IPC channel for passing messages/file descriptors between parent and child. A ChildProcess may have at most one IPC stdio file descriptor. Setting this option enables the ChildProcess.send() method. If the child writes JSON messages to this file descriptor, then this will trigger ChildProcess.on('message'). If the child is a Node.js program, then the presence of an IPC channel will enable process.send() and process.on('message').
  3. 'ignore' - Do not set this file descriptor in the child. Note that Node will always open fd 0 - 2 for the processes it spawns. When any of these is ignored node will open /dev/null and attach it to the child's fd.
  4. Stream object - Share a readable or writable stream that refers to a tty, file, socket, or a pipe with the child process. The stream's underlying file descriptor is duplicated in the child process to the fd that corresponds to the index in the stdio array.
  5. Positive integer - The integer value is interpreted as a file descriptor that is is currently open in the parent process. It is shared with the child process, similar to how Stream objects can be shared.
  6. null, undefined - Use default value. For stdio fds 0, 1 and 2 (in other words, stdin, stdout, and stderr) a pipe is created. For fd 3 and up, the default is 'ignore'.
  1. 'pipe' -在子进程与父进程之间创建一个管道,管道的父进程端以 child_process 的属性的形式暴露给父进程,如 ChildProcess.stdio[fd]。 为 文件标识(fds) 0 - 2 建立的管道也可以通过 ChildProcess.stdin,ChildProcess.stdout 及 ChildProcess.stderr 分别访问。

  2. 'ipc' - 创建一个IPC通道以在父进程与子进程之间传递 消息/文件标识符。一个子进程只能有最多一个 IPC stdio 文件标识。 设置该选项激活 ChildProcess.send() 方法。如果子进程向此文件标识符写JSON消息,则会触发 ChildProcess.on("message")。 如果子进程是一个nodejs程序,那么IPC通道的存在会激活process.send()和process.on('message')

  3. 'ignore' - 不在子进程中设置该文件标识。注意,Node 总是会为其spawn的进程打开 文件标识(fd) 0 - 2。 当其中任意一项被 ignored,node 会打开 /dev/null 并将其附给子进程的文件标识(fd)。

  4. Stream 对象 - 与子进程共享一个与tty,文件,socket,或者管道(pipe)相关的可读或可写流。 该流底层(underlying)的文件标识在子进程中被复制给stdio数组索引对应的文件标识(fd)

  5. 正数 - 该整型值被解释为父进程中当前打开的文件描述符。它与子进程共享,与 Stream 对象的共享方式类似。

  6. null, undefined - 使用默认值。 对于stdio fds 0,1,2(或者说stdin,stdout和stderr),pipe管道被建立。对于fd 3及往后,默认为ignore

As a shorthand, the stdio argument may also be one of the following strings, rather than an array:

作为快捷方式,stdio 参数除了数组也可以是下列字符串之一:

  • ignore - ['ignore', 'ignore', 'ignore']
  • pipe - ['pipe', 'pipe', 'pipe']
  • inherit - [process.stdin, process.stdout, process.stderr] or [0,1,2]

  • ignore - ['ignore', 'ignore', 'ignore']

  • pipe - ['pipe', 'pipe', 'pipe']
  • inherit - [process.stdin, process.stdout, process.stderr][0,1,2]

Example:

示例:

// 开启一个额外的 fd=4 来与提供 startd 风格接口的程序进行交互。
spawn('prg', [], { stdio: ['pipe', null, null, null, 'pipe'] });

If the detached option is set, the child process will be made the leader of a new process group. This makes it possible for the child to continue running after the parent exits.

如果 detached 选项被设置,则子进程会被作为新进程组的 leader。这使得子进程可以在父进程退出后继续运行。

By default, the parent will wait for the detached child to exit. To prevent the parent from waiting for a given child, use the child.unref() method, and the parent's event loop will not include the child in its reference count.

缺省情况下,父进程会等待脱离了的子进程退出。要阻止父进程等待一个给出的子进程 child,使用 child.unref() 方法,则父进程的事件循环引用计数中将不会包含这个子进程。

Example of detaching a long-running process and redirecting its output to a file:

脱离一个长时间运行的进程并将它的输出重定向到一个文件的例子:

var fs = require('fs'),
    spawn = require('child_process').spawn,
    out = fs.openSync('./out.log', 'a'),
    err = fs.openSync('./out.log', 'a');

var child = spawn('prg', [], {
  detached: true,
  stdio: [ 'ignore', out, err ]
});

child.unref();

When using the detached option to start a long-running process, the process will not stay running in the background unless it is provided with a stdio configuration that is not connected to the parent. If the parent's stdio is inherited, the child will remain attached to the controlling terminal.

当使用 detached 选项来启动一个长时间运行的进程,该进程不会在后台保持运行,除非向它提供了一个不连接到父进程的 stdio 配置。如果继承了父进程的 stdio,则子进程会继续附着在控制终端。

There is a deprecated option called customFds which allows one to specify specific file descriptors for the stdio of the child process. This API was not portable to all platforms and therefore removed. With customFds it was possible to hook up the new process' [stdin, stdout, stderr] to existing streams; -1 meant that a new stream should be created. Use at your own risk.

有一个已废弃的选项 customFds 允许指定特定文件描述符作为子进程的 stdio。该 API 无法移植到所有平台,因此被移除。使用 customFds 可以将新进程的 [stdin, stdout, stderr] 钩到已有流上;-1 表示创建新流。自己承担使用风险。

See also: child_process.exec() and child_process.fork()

参阅:child_process.exec()child_process.fork()

child_process.exec(command, [options], callback)#

  • command String The command to run, with space-separated arguments
  • options Object
    • cwd String Current working directory of the child process
    • env Object Environment key-value pairs
    • encoding String (Default: 'utf8')
    • shell String Shell to execute the command with (Default: '/bin/sh' on UNIX, 'cmd.exe' on Windows, The shell should understand the -c switch on UNIX or /s /c on Windows. On Windows, command line parsing should be compatible with cmd.exe.)
    • timeout Number (Default: 0)
    • maxBuffer Number (Default: 200*1024)
    • killSignal String (Default: 'SIGTERM')
  • callback Function called with the output when process terminates
    • error Error
    • stdout Buffer
    • stderr Buffer
  • Return: ChildProcess object

  • command String 将要执行的命令,用空格分隔参数

  • options Object
    • cwd String 子进程的当前工作目录
    • env Object 环境变量键值对
    • encoding String 编码(缺省为 'utf8')
    • shell String 运行命令的 shell(UNIX 上缺省为 '/bin/sh',Windows 上缺省为 'cmd.exe'。该 shell 在 UNIX 上应当接受 -c 开关,在 Windows 上应当接受 /s /c 开关。在 Windows 中,命令行解析应当兼容 cmd.exe。)
    • timeout Number 超时(缺省为 0)
    • maxBuffer Number 最大缓冲(缺省为 200*1024)
    • killSignal String 结束信号(缺省为 'SIGTERM')
  • callback Function 进程结束时回调并带上输出
    • error Error
    • stdout Buffer
    • stderr Buffer
  • 返回:ChildProcess 对象

Runs a command in a shell and buffers the output.

在 shell 中执行一个命令并缓冲输出。

var exec = require('child_process').exec;

var child = exec('cat *.js bad_file | wc -l',
  function (error, stdout, stderr) {
    console.log('stdout: ' + stdout);
    console.log('stderr: ' + stderr);
    if (error !== null) {
      console.log('exec error: ' + error);
    }
});

The callback gets the arguments (error, stdout, stderr). On success, error will be null. On error, error will be an instance of Error and err.code will be the exit code of the child process, and err.signal will be set to the signal that terminated the process.

回调参数为 (error, stdout, stderr)。当成功时,error 会是 null。当遇到错误时,error 会是一个 Error 实例,并且 err.code 会是子进程的退出代码,同时 err.signal 会被设置为结束进程的信号名。

There is a second optional argument to specify several options. The default options are

第二个可选的参数用于指定一些选项,缺省选项为:

{ encoding: 'utf8',
  timeout: 0,
  maxBuffer: 200*1024,
  killSignal: 'SIGTERM',
  cwd: null,
  env: null }

If timeout is greater than 0, then it will kill the child process if it runs longer than timeout milliseconds. The child process is killed with killSignal (default: 'SIGTERM'). maxBuffer specifies the largest amount of data allowed on stdout or stderr - if this value is exceeded then the child process is killed.

如果 timeout 大于 0,则当进程运行超过 timeout 毫秒后会被终止。子进程使用 killSignal 信号结束(缺省为 'SIGTERM')。maxBuffer 指定了 stdout 或 stderr 所允许的最大数据量,如果超出这个值则子进程会被终止。

child_process.execFile(file, args, options, callback)#

  • file String The filename of the program to run
  • args Array List of string arguments
  • options Object
    • cwd String Current working directory of the child process
    • env Object Environment key-value pairs
    • encoding String (Default: 'utf8')
    • timeout Number (Default: 0)
    • maxBuffer Number (Default: 200*1024)
    • killSignal String (Default: 'SIGTERM')
  • callback Function called with the output when process terminates
    • error Error
    • stdout Buffer
    • stderr Buffer
  • Return: ChildProcess object

  • file String 要运行的程序的文件名

  • args Array 字符串参数列表
  • options Object
    • cwd String 子进程的当前工作目录
    • env Object 环境变量键值对
    • encoding String 编码(缺省为 'utf8')
    • timeout Number 超时(缺省为 0)
    • maxBuffer Number 最大缓冲(缺省为 200*1024)
    • killSignal String 结束信号(缺省为 'SIGTERM')
  • callback Function 进程结束时回调并带上输出
    • error Error
    • stdout Buffer
    • stderr Buffer
  • 返回:ChildProcess 对象

This is similar to child_process.exec() except it does not execute a subshell but rather the specified file directly. This makes it slightly leaner than child_process.exec. It has the same options.

该方法类似于 child_process.exec(),但它不会执行一个子 shell,而是直接执行指定的文件,因此比 child_process.exec 稍为轻量。二者的选项相同。

child_process.fork(modulePath, [args], [options])#

  • modulePath String The module to run in the child
  • args Array List of string arguments
  • options Object
    • cwd String Current working directory of the child process
    • env Object Environment key-value pairs
    • encoding String (Default: 'utf8')
    • execPath String Executable used to create the child process
  • Return: ChildProcess object

  • modulePath String 子进程中运行的模块

  • args Array 字符串参数列表
  • options Object
    • cwd String 子进程的当前工作目录
    • env Object 环境变量键值对
    • encoding String 编码(缺省为 'utf8')
    • execPath String 创建子进程的可执行文件
  • 返回:ChildProcess 对象

This is a special case of the spawn() functionality for spawning Node processes. In addition to having all the methods in a normal ChildProcess instance, the returned object has a communication channel built-in. See child.send(message, [sendHandle]) for details.

该方法是 spawn() 的特殊情景,用于派生 Node 进程。除了普通 ChildProcess 实例所具有的所有方法,所返回的对象还具有内建的通讯通道。详见 child.send(message, [sendHandle])

By default the spawned Node process will have the stdout, stderr associated with the parent's. To change this behavior set the silent property in the options object to true.

缺省情况下所派生的 Node 进程的 stdout、stderr 会关联到父进程。要更改该行为,可将 options 对象中的 silent 属性设置为 true

The child process does not automatically exit once it's done, you need to call process.exit() explicitly. This limitation may be lifted in the future.

子进程运行完成后并不会自动退出,您需要明确地调用 process.exit()。该限制可能会在未来的版本中解除。

These child Nodes are still whole new instances of V8. Assume at least 30ms startup and 10mb memory for each new Node. That is, you cannot create many thousands of them.

这些子 Node 是全新的 V8 实例,假设每个新的 Node 需要至少 30 毫秒的启动时间和 10MB 内存,就是说您不能创建成百上千个这样的实例。

The execPath property in the options object allows for a process to be created for the child rather than the current node executable. This should be done with care, and by default will talk over the fd represented by the environment variable NODE_CHANNEL_FD on the child process. The input and output on this fd are expected to be line-delimited JSON objects.

options 对象中的 execPath 属性可以用非当前 node 可执行文件来创建子进程。这需要小心使用,并且缺省情况下会通过子进程上由环境变量 NODE_CHANNEL_FD 所表示的文件描述符来通讯。该文件描述符上的输入和输出应当是以行分隔的 JSON 对象。

断言 (assert)#

稳定度: 5 - 已锁定

This module is used for writing unit tests for your applications, you can access it with require('assert').

此模块主要用于对您的程序进行单元测试,要使用它,请 require('assert')

assert.fail(actual, expected, message, operator)#

Throws an exception that displays the values for actual and expected separated by the provided operator.

抛出一个异常,显示由所提供的 operator (操作符)分隔的 actual (实际值)和 expected (期望值)。

assert(value, message), assert.ok(value, [message])#

Tests if value is truthy, it is equivalent to assert.equal(true, !!value, message);

测试 value 是否为真值 (truthy),相当于 assert.equal(true, !!value, message);

assert.equal(actual, expected, [message])#

Tests shallow, coercive equality with the equal comparison operator ( == ).

浅测试,使用相等操作符 ( == ) 进行强制转换的相等性比较。

assert.notEqual(actual, expected, [message])#

Tests shallow, coercive non-equality with the not equal comparison operator ( != ).

浅测试,使用不等操作符 ( != ) 进行强制转换的不等性比较。

assert.deepEqual(actual, expected, [message])#

Tests for deep equality.

用于深度匹配测试。

assert.notDeepEqual(actual, expected, [message])#

Tests for any deep inequality.

用于深度非匹配测试。

assert.strictEqual(actual, expected, [message])#

Tests strict equality, as determined by the strict equality operator ( === )

严格相等测试,由严格相等操作符 ( === ) 决定。

assert.notStrictEqual(actual, expected, [message])#

Tests strict non-equality, as determined by the strict not equal operator ( !== )

严格不等测试,由严格不等操作符 ( !== ) 决定。

assert.throws(block, [error], [message])#

Expects block to throw an error. error can be constructor, regexp or validation function.

预期 block 抛出一个错误。error 可以是构造函数、正则表达式或者验证函数。

Validate instanceof using constructor:

使用构造函数验证错误实例 (instanceof):

assert.throws(
  function() {
    throw new Error("错误值");
  },
  Error
);

Validate error message using RegExp:

用正则表达式验证错误消息。

assert.throws(
  function() {
    throw new Error("Wrong value");
  },
  /value/
);

Custom error validation:

自定义错误校验:

assert.throws(
  function() {
    throw new Error("Wrong value");
  },
  function(err) {
    if ( (err instanceof Error) && /value/.test(err) ) {
      return true;
    }
  },
  "unexpected error"
);

assert.doesNotThrow(block, [message])#

Expects block not to throw an error, see assert.throws for details.

预期 block 不会抛出错误,详见 assert.throws。

assert.ifError(value)#

Tests if value is not a false value, throws if it is a true value. Useful when testing the first argument, error in callbacks.

测试值是否不为 false,当为 true 时抛出。常用于回调中第一个 error 参数的检查。

TTY#

稳定度: 2 - 不稳定

The tty module houses the tty.ReadStream and tty.WriteStream classes. In most cases, you will not need to use this module directly.

tty 模块提供了 tty.ReadStreamtty.WriteStream 类。在大部分情况下,您都不会需要直接使用此模块。

When node detects that it is being run inside a TTY context, then process.stdin will be a tty.ReadStream instance and process.stdout will be a tty.WriteStream instance. The preferred way to check if node is being run in a TTY context is to check process.stdout.isTTY:

当 node 检测到它正运行于 TTY 上下文中时,process.stdin 将会是一个 tty.ReadStream 实例,且 process.stdout 也将会是一个 tty.WriteStream 实例。检查 node 是否运行于 TTY 上下文的首选方式是检查 process.stdout.isTTY

$ node -p -e "Boolean(process.stdout.isTTY)"
true
$ node -p -e "Boolean(process.stdout.isTTY)" | cat
false

tty.isatty(fd)#

Returns true or false depending on if the fd is associated with a terminal.

fd 关联于中端则返回 true,反之返回 false

tty.setRawMode(mode)#

Deprecated. Use tty.ReadStream#setRawMode() (i.e. process.stdin.setRawMode()) instead.

已废弃,请使用 tty.ReadStream#setRawMode()(如 process.stdin.setRawMode())。

类: ReadStream#

A net.Socket subclass that represents the readable portion of a tty. In normal circumstances, process.stdin will be the only tty.ReadStream instance in any node program (only when isatty(0) is true).

一个 net.Socket 子类,代表 TTY 的可读部分。通常情况下,在所有 node 程序中 process.stdin 会是仅有的 tty.ReadStream 实例(仅当 isatty(0) 为 true 时)。

rs.isRaw#

A Boolean that is initialized to false. It represents the current "raw" state of the tty.ReadStream instance.

一个 Boolean,初始为 false,代表 tty.ReadStream 实例的当前 "raw" 状态。

rs.setRawMode(mode)#

mode should be true or false. This sets the properties of the tty.ReadStream to act either as a raw device or default. isRaw will be set to the resulting mode.

mode 可以是 truefalse。它设定 tty.ReadStream 的属性表现为原始设备或缺省。isRaw 会被设置为结果模式。

类: WriteStream#

A net.Socket subclass that represents the writable portion of a tty. In normal circumstances, process.stdout will be the only tty.WriteStream instance ever created (and only when isatty(1) is true).

一个 net.Socket 子类,代表 TTY 的可写部分。通常情况下 process.stdout 会是仅有的 tty.WriteStream 实例(仅当 isatty(1) 为 true 时)。

ws.columns#

A Number that gives the number of columns the TTY currently has. This property gets updated on "resize" events.

一个 Number,表示 TTY 当前的列数。该属性会在 "resize" 事件中被更新。

ws.rows#

A Number that gives the number of rows the TTY currently has. This property gets updated on "resize" events.

一个 Number,表示 TTY 当前的行数。该属性会在 "resize" 事件中被更新。

事件: 'resize'#

function () {}

function () {}

Emitted by refreshSize() when either of the columns or rows properties has changed.

refreshSize()columnsrows 属性被改变时触发。

process.stdout.on('resize', function() {
  console.log('screen size has changed!');
  console.log(process.stdout.columns + 'x' + process.stdout.rows);
});


process.stdout.on('resize', function() {
  console.log('屏幕大小已改变!');
  console.log(process.stdout.columns + 'x' + process.stdout.rows);
});

Zlib#

稳定度: 3 - 稳定

You can access this module with:

你可以这样引入此模块:

var zlib = require('zlib');

This provides bindings to Gzip/Gunzip, Deflate/Inflate, and DeflateRaw/InflateRaw classes. Each class takes the same options, and is a readable/writable Stream.

这个模块提供了对 Gzip/Gunzip、Deflate/Inflate 和 DeflateRaw/InflateRaw 类的绑定。每个类都接受相同的选项,并且本身都是可读写的流 (Stream)。

例子#

Compressing or decompressing a file can be done by piping an fs.ReadStream into a zlib stream, then into an fs.WriteStream.

压缩或解压缩一个文件可以通过导流一个 fs.ReadStream 到一个 zlib 流,然后到一个 fs.WriteStream 来完成。

var fs = require('fs');
var zlib = require('zlib');

var gzip = zlib.createGzip();
var inp = fs.createReadStream('input.txt');
var out = fs.createWriteStream('input.txt.gz');

inp.pipe(gzip).pipe(out);

Compressing or decompressing data in one step can be done by using the convenience methods.

一步压缩或解压缩数据可以通过快捷方法来完成。

var buffer = new Buffer('eJzT0yMAAGTvBe8=', 'base64');
zlib.unzip(buffer, function(err, buffer) {
  if (!err) {
    console.log(buffer.toString());
  }
});

To use this module in an HTTP client or server, use the accept-encoding on requests, and the content-encoding header on responses.

要在 HTTP 客户端或服务器中使用此模块,请在请求和响应中使用 accept-encodingcontent-encoding 头。

Note: these examples are drastically simplified to show the basic concept. Zlib encoding can be expensive, and the results ought to be cached. See Memory Usage Tuning below for more information on the speed/memory/compression tradeoffs involved in zlib usage.

注意:这些例子只是极度简化地展示了基本概念。Zlib 编码的开销可能非常大,其结果应当被缓存。参阅下文的内存使用调优,了解 zlib 使用中关于速度/内存/压缩率的更多权衡取舍。

// 服务器示例
// 对每个请求都进行 gzip 操作的开销十分大,
// 缓存压缩后的缓冲区会高效得多。
var zlib = require('zlib');
var http = require('http');
var fs = require('fs');
http.createServer(function(request, response) {
  var raw = fs.createReadStream('index.html');
  var acceptEncoding = request.headers['accept-encoding'];
  if (!acceptEncoding) {
    acceptEncoding = '';
  }

  // 注意: 这不是一个合格的 accept-encoding 解析器
  // 详见 http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3
  if (acceptEncoding.match(/\bdeflate\b/)) {
    response.writeHead(200, { 'content-encoding': 'deflate' });
    raw.pipe(zlib.createDeflate()).pipe(response);
  } else if (acceptEncoding.match(/\bgzip\b/)) {
    response.writeHead(200, { 'content-encoding': 'gzip' });
    raw.pipe(zlib.createGzip()).pipe(response);
  } else {
    response.writeHead(200, {});
    raw.pipe(response);
  }
}).listen(1337);

zlib.createGzip([options])#

Returns a new Gzip object with an options.

options 所给选项返回一个新的 Gzip 对象。

zlib.createGunzip([options])#

Returns a new Gunzip object with an options.

options 所给选项返回一个新的 Gunzip 对象。

zlib.createDeflate([options])#

Returns a new Deflate object with an options.

options 所给选项返回一个新的 Deflate 对象。

zlib.createInflate([options])#

Returns a new Inflate object with an options.

options 所给选项返回一个新的 Inflate 对象。

zlib.createDeflateRaw([options])#

Returns a new DeflateRaw object with an options.

options 所给选项返回一个新的 DeflateRaw 对象。

zlib.createInflateRaw([options])#

Returns a new InflateRaw object with an options.

options 所给选项返回一个新的 InflateRaw 对象。

zlib.createUnzip([options])#

Returns a new Unzip object with an options.

options 所给选项返回一个新的 Unzip 对象。

类: zlib.Zlib#

Not exported by the zlib module. It is documented here because it is the base class of the compressor/decompressor classes.

这个类未被 zlib 模块导出,编入此文档是因为它是其它压缩器/解压缩器的基类。

zlib.flush([kind], callback)#

kind defaults to zlib.Z_FULL_FLUSH.

kind 缺省为 zlib.Z_FULL_FLUSH

Flush pending data. Don't call this frivolously, premature flushes negatively impact the effectiveness of the compression algorithm.

刷出缓冲数据。请勿轻易调用此方法,过早的刷出会对压缩算法的效果产生负面影响。

zlib.params(level, strategy, callback)#

Dynamically update the compression level and compression strategy. Only applicable to deflate algorithm.

动态更新压缩级别和压缩策略。仅对 deflate 算法有效。

zlib.reset()#

Reset the compressor/decompressor to factory defaults. Only applicable to the inflate and deflate algorithms.

将压缩器/解压缩器重置为缺省值。仅对 inflate 和 deflate 算法有效。

类: zlib.Gzip#

Compress data using gzip.

使用 gzip 压缩数据。

类: zlib.Gunzip#

Decompress a gzip stream.

解压缩一个 gzip 流。

类: zlib.Deflate#

Compress data using deflate.

使用 deflate 压缩数据。

类: zlib.Inflate#

Decompress a deflate stream.

解压缩一个 deflate 流。

类: zlib.DeflateRaw#

Compress data using deflate, and do not append a zlib header.

使用 deflate 压缩数据,并且不附带 zlib 头。

类: zlib.InflateRaw#

Decompress a raw deflate stream.

解压缩一个原始 deflate 流。

类: zlib.Unzip#

Decompress either a Gzip- or Deflate-compressed stream by auto-detecting the header.

自动识别头部来解压缩一个以 gzip 或 deflate 压缩的流。

快捷方法#

All of these take a string or buffer as the first argument, an optional second argument to supply options to the zlib classes and will call the supplied callback with callback(error, result).

所有这些方法的第一个参数都可以是字符串或 Buffer;可选地可以将传给 zlib 类的选项作为第二个参数传入;回调格式为 callback(error, result)

zlib.deflate(buf, [options], callback)#

Compress a string with Deflate.

使用 Deflate 压缩一个字符串。

zlib.deflateRaw(buf, [options], callback)#

Compress a string with DeflateRaw.

使用 DeflateRaw 压缩一个字符串。

zlib.gzip(buf, [options], callback)#

Compress a string with Gzip.

使用 Gzip 压缩一个字符串。

zlib.gunzip(buf, [options], callback)#

Decompress a raw Buffer with Gunzip.

使用 Gunzip 解压缩一个原始的 Buffer。

zlib.inflate(buf, [options], callback)#

Decompress a raw Buffer with Inflate.

使用 Inflate 解压缩一个原始的 Buffer。

zlib.inflateRaw(buf, [options], callback)#

Decompress a raw Buffer with InflateRaw.

使用 InflateRaw 解压缩一个原始的 Buffer。

zlib.unzip(buf, [options], callback)#

Decompress a raw Buffer with Unzip.

使用 Unzip 解压缩一个原始的 Buffer。

选项#

Each class takes an options object. All options are optional.

各个类都有一个选项对象。所有选项都是可选的。

Note that some options are only relevant when compressing, and are ignored by the decompression classes.

请注意有些选项仅对压缩有效,并会被解压缩类所忽略。

  • flush (default: zlib.Z_NO_FLUSH)
  • chunkSize (default: 16*1024)
  • windowBits
  • level (compression only)
  • memLevel (compression only)
  • strategy (compression only)
  • dictionary (deflate/inflate only, empty dictionary by default)

  • flush(缺省:zlib.Z_NO_FLUSH

  • chunkSize(缺省:16*1024)
  • windowBits
  • level(仅用于压缩)
  • memLevel(仅用于压缩)
  • strategy(仅用于压缩)
  • dictionary(仅用于 deflate/inflate,缺省为空字典)

See the description of deflateInit2 and inflateInit2 at

http://zlib.net/manual.html#Advanced for more information on these.

详情请参阅 http://zlib.net/manual.html#Advanced 中关于 deflateInit2 和 inflateInit2 的说明。

内存使用调优#

From zlib/zconf.h, modified to node's usage:

摘自 zlib/zconf.h,并按 node 的用法作了修改:

The memory requirements for deflate are (in bytes):

deflate 的内存需求(按字节):

(1 << (windowBits+2)) +  (1 << (memLevel+9))

that is: 128K for windowBits=15 + 128K for memLevel = 8 (default values) plus a few kilobytes for small objects.

即:windowBits = 15 时的 128K 加上 memLevel = 8 时的 128K(均为缺省值),再加上用于小对象的几 KB。

For example, if you want to reduce the default memory requirements from 256K to 128K, set the options to:

举个例子,如果您需要将缺省内存需求从 256K 减少到 128K,设置选项:

{ windowBits: 14, memLevel: 7 }

Of course this will generally degrade compression (there's no free lunch).

当然这通常会降低压缩效果(天下没有免费的午餐)。

The memory requirements for inflate are (in bytes)

inflate 的内存需求(按字节):

1 << windowBits

that is, 32K for windowBits=15 (default value) plus a few kilobytes for small objects.

即 windowBits = 15(缺省值)时的 32K,再加上用于小对象的几 KB。

This is in addition to a single internal output slab buffer of size chunkSize, which defaults to 16K.

此外,还有一个大小为 chunkSize 的内部输出缓冲区,缺省为 16K。

The speed of zlib compression is affected most dramatically by the level setting. A higher level will result in better compression, but will take longer to complete. A lower level will result in less compression, but will be much faster.

zlib 压缩的速度主要受压缩级别 level 的影响。更高的压缩级别会有更好的压缩率,但也要花费更长时间。更低的压缩级别会有较低压缩率,但速度更快。

In general, greater memory usage options will mean that node has to make fewer calls to zlib, since it'll be able to process more data in a single write operation. So, this is another factor that affects the speed, at the cost of memory usage.

通常,使用更多内存的选项意味着 node 能减少对 zlib 的调用次数,因为单次 write 操作能处理更多数据。因此,这是另一个以内存占用为代价影响速度的因素。

常量#

All of the constants defined in zlib.h are also defined on require('zlib'). In the normal course of operations, you will not need to ever set any of these. They are documented here so that their presence is not surprising. This section is taken almost directly from the zlib documentation. See

http://zlib.net/manual.html#Constants for more details.

所有在 zlib.h 中定义的常量同样也定义在 require('zlib') 中。 在通常情况下您几乎不会用到它们,编入文档只是为了让您不会对它们的存在感到惊讶。该章节几乎完全来自 zlib 的文档。详见 http://zlib.net/manual.html#Constants

Allowed flush values.

允许的 flush 取值。

  • zlib.Z_NO_FLUSH
  • zlib.Z_PARTIAL_FLUSH
  • zlib.Z_SYNC_FLUSH
  • zlib.Z_FULL_FLUSH
  • zlib.Z_FINISH
  • zlib.Z_BLOCK
  • zlib.Z_TREES

  • zlib.Z_NO_FLUSH

  • zlib.Z_PARTIAL_FLUSH
  • zlib.Z_SYNC_FLUSH
  • zlib.Z_FULL_FLUSH
  • zlib.Z_FINISH
  • zlib.Z_BLOCK
  • zlib.Z_TREES

Return codes for the compression/decompression functions. Negative values are errors, positive values are used for special but normal events.

压缩/解压缩函数的返回值。负数代表错误,正数代表特殊但正常的事件。

  • zlib.Z_OK
  • zlib.Z_STREAM_END
  • zlib.Z_NEED_DICT
  • zlib.Z_ERRNO
  • zlib.Z_STREAM_ERROR
  • zlib.Z_DATA_ERROR
  • zlib.Z_MEM_ERROR
  • zlib.Z_BUF_ERROR
  • zlib.Z_VERSION_ERROR

  • zlib.Z_OK

  • zlib.Z_STREAM_END
  • zlib.Z_NEED_DICT
  • zlib.Z_ERRNO
  • zlib.Z_STREAM_ERROR
  • zlib.Z_DATA_ERROR
  • zlib.Z_MEM_ERROR
  • zlib.Z_BUF_ERROR
  • zlib.Z_VERSION_ERROR

Compression levels.

压缩级别。

  • zlib.Z_NO_COMPRESSION
  • zlib.Z_BEST_SPEED
  • zlib.Z_BEST_COMPRESSION
  • zlib.Z_DEFAULT_COMPRESSION

  • zlib.Z_NO_COMPRESSION

  • zlib.Z_BEST_SPEED
  • zlib.Z_BEST_COMPRESSION
  • zlib.Z_DEFAULT_COMPRESSION

Compression strategy.

压缩策略。

  • zlib.Z_FILTERED
  • zlib.Z_HUFFMAN_ONLY
  • zlib.Z_RLE
  • zlib.Z_FIXED
  • zlib.Z_DEFAULT_STRATEGY

  • zlib.Z_FILTERED

  • zlib.Z_HUFFMAN_ONLY
  • zlib.Z_RLE
  • zlib.Z_FIXED
  • zlib.Z_DEFAULT_STRATEGY

Possible values of the data_type field.

data_type 字段的可能值。

  • zlib.Z_BINARY
  • zlib.Z_TEXT
  • zlib.Z_ASCII
  • zlib.Z_UNKNOWN

  • zlib.Z_BINARY

  • zlib.Z_TEXT
  • zlib.Z_ASCII
  • zlib.Z_UNKNOWN

The deflate compression method (the only one supported in this version).

deflate 压缩方法(该版本唯一支持的压缩方法)。

  • zlib.Z_DEFLATED

  • zlib.Z_DEFLATED

For initializing zalloc, zfree, opaque.

初始化 zalloc/zfree/opaque。

  • zlib.Z_NULL
  • zlib.Z_NULL

操作系统#

稳定度: 4 - 冻结

Provides a few basic operating-system related utility functions.

提供一些基本的操作系统相关函数。

Use require('os') to access this module.

使用 require('os') 来调用这个模块。

os.tmpdir()#

Returns the operating system's default directory for temp files.

返回操作系统默认的临时文件目录。

os.endianness()#

Returns the endianness of the CPU. Possible values are "BE" or "LE".

返回 CPU 的字节序,可能的值为 "BE" 或 "LE"。

os.hostname()#

Returns the hostname of the operating system.

返回操作系统的主机名。

os.type()#

Returns the operating system name.

返回操作系统名称。

os.platform()#

Returns the operating system platform.

返回操作系统平台。

os.arch()#

Returns the operating system CPU architecture. Possible values are "x64", "arm" and "ia32".

返回操作系统 CPU 架构,可能的值有 "x64""arm""ia32"

os.release()#

Returns the operating system release.

返回操作系统的发行版本。

os.uptime()#

Returns the system uptime in seconds.

返回操作系统运行的时间,以秒为单位。

os.loadavg()#

Returns an array containing the 1, 5, and 15 minute load averages.

返回一个包含 1、5、15 分钟平均负载的数组。

os.totalmem()#

Returns the total amount of system memory in bytes.

返回系统内存总量,单位为字节。

os.freemem()#

Returns the amount of free system memory in bytes.

返回操作系统空闲内存量,单位是字节。

os.cpus()#

Returns an array of objects containing information about each CPU/core installed: model, speed (in MHz), and times (an object containing the number of milliseconds the CPU/core spent in: user, nice, sys, idle, and irq).

返回一个对象数组,包含所安装的每个 CPU/内核的信息:型号、速度(单位 MHz)、时间(一个包含 user、nice、sys、idle 和 irq 所使用 CPU/内核毫秒数的对象)。

Example inspection of os.cpus:

os.cpus 的示例:

[ { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 252020,
       nice: 0,
       sys: 30340,
       idle: 1070356870,
       irq: 0 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 306960,
       nice: 0,
       sys: 26980,
       idle: 1071569080,
       irq: 0 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 248450,
       nice: 0,
       sys: 21750,
       idle: 1070919370,
       irq: 0 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 256880,
       nice: 0,
       sys: 19430,
       idle: 1070905480,
       irq: 20 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 511580,
       nice: 20,
       sys: 40900,
       idle: 1070842510,
       irq: 0 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 291660,
       nice: 0,
       sys: 34360,
       idle: 1070888000,
       irq: 10 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 308260,
       nice: 0,
       sys: 55410,
       idle: 1071129970,
       irq: 880 } },
  { model: 'Intel(R) Core(TM) i7 CPU         860  @ 2.80GHz',
    speed: 2926,
    times:
     { user: 266450,
       nice: 1480,
       sys: 34920,
       idle: 1072572010,
       irq: 30 } } ]

os.networkInterfaces()#

Get a list of network interfaces:

获取网络接口的列表信息:

{ lo:
   [ { address: '127.0.0.1',
       netmask: '255.0.0.0',
       family: 'IPv4',
       mac: '00:00:00:00:00:00',
       internal: true },
     { address: '::1',
       netmask: 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff',
       family: 'IPv6',
       mac: '00:00:00:00:00:00',
       internal: true } ],
  eth0:
   [ { address: '192.168.1.108',
       netmask: '255.255.255.0',
       family: 'IPv4',
       mac: '01:02:03:0a:0b:0c',
       internal: false },
     { address: 'fe80::a00:27ff:fe4e:66a1',
       netmask: 'ffff:ffff:ffff:ffff::',
       family: 'IPv6',
       mac: '01:02:03:0a:0b:0c',
       internal: false } ] }

os.EOL#

A constant defining the appropriate End-of-line marker for the operating system.

一个定义了当前操作系统相应行尾标识的常量。

调试器#

稳定度: 3 - 稳定

V8 comes with an extensive debugger which is accessible out-of-process via a simple TCP protocol. Node has a built-in client for this debugger. To use this, start Node with the debug argument; a prompt will appear:

V8 提供了一个强大的调试器,可以通过 TCP 协议从外部访问。Node 内建了这个调试器的客户端。要使用调试器,以 debug 参数启动 Node,出现提示符:

% node debug myscript.js
< debugger listening on port 5858
connecting... ok
break in /home/indutny/Code/git/indutny/myscript.js:1
  1 x = 5;
  2 setTimeout(function () {
  3   debugger;
debug>

Node's debugger client doesn't support the full range of commands, but simple step and inspection is possible. By putting the statement debugger; into the source code of your script, you will enable a breakpoint.

Node 的调试器客户端并未完整支持所有命令,但简单的步进和检查是可行的。通过脚本的源代码中放置 debugger; 语句,您便可启用一个断点。

For example, suppose myscript.js looked like this:

比如,假设有一个类似这样的 myscript.js

// myscript.js
x = 5;
setTimeout(function () {
  debugger;
  console.log("world");
}, 1000);
console.log("hello");

Then once the debugger is run, it will break on line 4.

那么,当调试器运行时,它会在第 4 行中断:

% node debug myscript.js
< debugger listening on port 5858
connecting... ok
break in /home/indutny/Code/git/indutny/myscript.js:1
  1 x = 5;
  2 setTimeout(function () {
  3   debugger;
debug> cont
< hello
break in /home/indutny/Code/git/indutny/myscript.js:3
  1 x = 5;
  2 setTimeout(function () {
  3   debugger;
  4   console.log("world");
  5 }, 1000);
debug> next
break in /home/indutny/Code/git/indutny/myscript.js:4
  2 setTimeout(function () {
  3   debugger;
  4   console.log("world");
  5 }, 1000);
  6 console.log("hello");
debug> repl
Press Ctrl + C to leave debug repl
> x
5
> 2+2
4
debug> next
< world
break in /home/indutny/Code/git/indutny/myscript.js:5
  3   debugger;
  4   console.log("world");
  5 }, 1000);
  6 console.log("hello");
  7
debug> quit
%

The repl command allows you to evaluate code remotely. The next command steps over to the next line. There are a few other commands available and more to come. Type help to see others.

repl 命令允许您远程执行代码;next 命令步进到下一行。此外还有一些其它命令,输入 help 查看。

监视器#

You can watch expression and variable values while debugging your code. On every breakpoint each expression from the watchers list will be evaluated in the current context and displayed just before the breakpoint's source code listing.

调试代码时您可以监视表达式或变量的值。在每个断点处,监视器列表中的各个表达式会在当前上下文中被求值,并显示在断点的源代码清单之前。

To start watching an expression, type watch("my_expression"). watchers prints the active watchers. To remove a watcher, type unwatch("my_expression").

输入 watch("my_expression") 开始监视一个表达式;watchers 显示活动监视器;unwatch("my_expression") 移除一个监视器。

命令参考#

步进#

  • cont, c - Continue execution
  • next, n - Step next
  • step, s - Step in
  • out, o - Step out
  • pause - Pause running code (like pause button in Developer Tools)

  • cont, c - 继续执行

  • next, n - 步进到下一行
  • step, s - 步入
  • out, o - 步出
  • pause - 暂停执行代码(类似开发者工具中的暂停按钮)

断点#

  • setBreakpoint(), sb() - Set breakpoint on current line
  • setBreakpoint(line), sb(line) - Set breakpoint on specific line
  • setBreakpoint('fn()'), sb(...) - Set breakpoint on a first statement in functions body
  • setBreakpoint('script.js', 1), sb(...) - Set breakpoint on first line of script.js
  • clearBreakpoint, cb(...) - Clear breakpoint

  • setBreakpoint(), sb() - 在当前行设置断点

  • setBreakpoint(line), sb(line) - 在指定行设置断点
  • setBreakpoint('fn()'), sb(...) - 在函数体的第一条语句设置断点
  • setBreakpoint('script.js', 1), sb(...) - 在 script.js 的第一行设置断点
  • clearBreakpoint, cb(...) - 清除断点

It is also possible to set a breakpoint in a file (module) that isn't loaded yet:

在一个尚未被加载的文件(模块)中设置断点也是可行的:

% ./node debug test/fixtures/break-in-module/main.js
< debugger listening on port 5858
connecting to port 5858... ok
break in test/fixtures/break-in-module/main.js:1
  1 var mod = require('./mod.js');
  2 mod.hello();
  3 mod.hello();
debug> setBreakpoint('mod.js', 23)
Warning: script 'mod.js' was not loaded yet.
  1 var mod = require('./mod.js');
  2 mod.hello();
  3 mod.hello();
debug> c
break in test/fixtures/break-in-module/mod.js:23
 21
 22 exports.hello = function() {
 23   return 'hello from module';
 24 };
 25
debug>

信息#

  • backtrace, bt - Print backtrace of current execution frame
  • list(5) - List scripts source code with 5 line context (5 lines before and after)
  • watch(expr) - Add expression to watch list
  • unwatch(expr) - Remove expression from watch list
  • watchers - List all watchers and their values (automatically listed on each breakpoint)
  • repl - Open debugger's repl for evaluation in debugging script's context

  • backtrace, bt - 显示当前执行框架的回溯

  • list(5) - 显示脚本源代码的 5 行上下文(之前 5 行和之后 5 行)
  • watch(expr) - 向监视列表添加表达式
  • unwatch(expr) - 从监视列表移除表达式
  • watchers - 列出所有监视器和它们的值(每个断点会自动列出)
  • repl - 在所调试的脚本的上下文中打开调试器的 repl 执行代码

执行控制#

  • run - Run script (automatically runs on debugger's start)
  • restart - Restart script
  • kill - Kill script

  • run - 运行脚本(调试器开始时自动运行)

  • restart - 重新运行脚本
  • kill - 终止脚本

杂项#

  • scripts - List all loaded scripts
  • version - Display v8's version

  • scripts - 列出所有已加载的脚本

  • version - 显示 V8 的版本

高级使用#

The V8 debugger can be enabled and accessed either by starting Node with the --debug command-line flag or by signaling an existing Node process with SIGUSR1.

V8 调试器可以从两种方式启用和访问:以 --debug 命令行标志启动 Node;或者向已存在的 Node 进程发送 SIGUSR1 信号。

Once a process has been set in debug mode with this it can be connected to with the node debugger. Either connect to the pid or the URI to the debugger. The syntax is:

一旦一个进程进入了调试模式,它便可被 Node 调试器连接。调试器可以通过 pid 或 URI 来连接,语法是:

  • node debug -p <pid> - Connects to the process via the pid
  • node debug <URI> - Connects to the process via the URI such as localhost:5858
  • node debug -p <pid> - 通过 pid 连接进程
  • node debug <URI> - 通过类似 localhost:5858 的 URI 连接进程

集群#

稳定度: 1 - 实验性

A single instance of Node runs in a single thread. To take advantage of multi-core systems the user will sometimes want to launch a cluster of Node processes to handle the load.

单个 Node 实例运行在单个线程中。要发挥多核系统的能力,用户有时候需要启动一个 Node 进程集群来处理负载。

The cluster module allows you to easily create a network of processes that all share server ports.

集群模块允许你方便地创建一个共享服务器端口的进程网络。

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // 派生工作进程
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', function(worker, code, signal) {
    console.log('工作进程 ' + worker.process.pid + ' 被终止');
  });
} else {
  // 工作进程可以共享任意 TCP 连接
  // 本例中为 HTTP 服务器
  http.createServer(function(req, res) {
    res.writeHead(200);
    res.end("你好世界\n");
  }).listen(8000);
}

Running node will now share port 8000 between the workers:

现在,运行 node 将会在所有工作进程间共享 8000 端口:

% NODE_DEBUG=cluster node server.js
23521,Master Worker 23524 online
23521,Master Worker 23526 online
23521,Master Worker 23523 online
23521,Master Worker 23528 online

This feature was introduced recently, and may change in future versions. Please try it out and provide feedback.

这是一个近期推出的功能,在未来版本中可能会有所改变。请尝试并提供反馈。

Also note that, on Windows, it is not yet possible to set up a named pipe server in a worker.

还需要注意,在 Windows 上尚不能在工作进程中建立命名管道服务器。

它是如何工作的#

The worker processes are spawned using the child_process.fork method, so that they can communicate with the parent via IPC and pass server handles back and forth.

工作进程是通过使用 child_process.fork 方法派生的,因此它们可以通过 IPC(进程间通讯)与父进程通讯并互相传递服务器句柄。

The cluster module supports two methods of distributing incoming connections.

集群模块支持两种分配传入连接的方式。

The first one (and the default one on all platforms except Windows), is the round-robin approach, where the master process listens on a port, accepts new connections and distributes them across the workers in a round-robin fashion, with some built-in smarts to avoid overloading a worker process.

第一种(同时也是除 Windows 外所有平台的缺省方式)为循环式:主进程监听一个端口,接受新连接,并以轮流的方式分配给工作进程,并以一些内建机制来避免单个工作进程的超载。

The second approach is where the master process creates the listen socket and sends it to interested workers. The workers then accept incoming connections directly.

第二种方式是,主进程建立监听套接字,并将它发送给感兴趣的工作进程,由工作进程直接接受传入连接。

The second approach should, in theory, give the best performance. In practice however, distribution tends to be very unbalanced due to operating system scheduler vagaries. Loads have been observed where over 70% of all connections ended up in just two processes, out of a total of eight.

第二种方式理论上性能最好。然而在实践中,由于操作系统调度器变幻莫测,分配往往十分不均衡。曾观测到这样的负载:总共八个进程中,仅两个进程就接收了所有连接的 70% 以上。

Because server.listen() hands off most of the work to the master process, there are three cases where the behavior between a normal node.js process and a cluster worker differs:

因为 server.listen() 将大部分工作交给了主进程,所以普通的 Node.js 进程与集群工作进程会在三种情况下有所区别:

  1. server.listen({fd: 7}) Because the message is passed to the master, file descriptor 7 in the parent will be listened on, and the handle passed to the worker, rather than listening to the worker's idea of what the number 7 file descriptor references.
  2. server.listen(handle) Listening on handles explicitly will cause the worker to use the supplied handle, rather than talk to the master process. If the worker already has the handle, then it's presumed that you know what you are doing.
  3. server.listen(0) Normally, this will cause servers to listen on a random port. However, in a cluster, each worker will receive the same "random" port each time they do listen(0). In essence, the port is random the first time, but predictable thereafter. If you want to listen on a unique port, generate a port number based on the cluster worker ID.
  1. server.listen({fd: 7}) 由于消息被传递到主进程,父进程中的文件描述符 7 会被监听,并且句柄会被传递给工作进程,而不是监听工作进程中文件描述符 7 所引用的东西。
  2. server.listen(handle) 明确地监听一个句柄会使得工作进程使用所给句柄,而不是与主进程通讯。如果工作进程已经拥有了该句柄,则假定您知道您在做什么。
  3. server.listen(0) 通常,这会让服务器监听一个随机端口。然而,在集群中,各个工作进程每次 listen(0) 都会得到一样的“随机”端口。实际上,端口在第一次时是随机的,但在那之后却是可预知的。如果您想要监听一个唯一的端口,则请根据集群工作进程 ID 来生成端口号。

There is no routing logic in Node.js, or in your program, and no shared state between the workers. Therefore, it is important to design your program such that it does not rely too heavily on in-memory data objects for things like sessions and login.

由于在 Node.js 或您的程序中并没有路由逻辑,工作进程之间也没有共享的状态,因此在您的程序中,诸如会话和登录等功能应当被设计成不能太过依赖于内存中的数据对象。

Because workers are all separate processes, they can be killed or re-spawned depending on your program's needs, without affecting other workers. As long as there are some workers still alive, the server will continue to accept connections. Node does not automatically manage the number of workers for you, however. It is your responsibility to manage the worker pool for your application's needs.

由于工作进程都是独立的进程,因此它们会根据您的程序的需要被终止或重新派生,并且不会影响到其它工作进程。只要还有工作进程存在,服务器就会继续接受连接。但是,Node 不会自动为您管理工作进程的数量,根据您的程序所需管理工作进程池是您的责任。

cluster.schedulingPolicy#

The scheduling policy, either cluster.SCHED_RR for round-robin or cluster.SCHED_NONE to leave it to the operating system. This is a global setting and effectively frozen once you spawn the first worker or call cluster.setupMaster(), whatever comes first.

调度策略:cluster.SCHED_RR 表示轮询式,cluster.SCHED_NONE 表示交由操作系统处理。这是一个全局设定,一旦您派生了第一个工作进程或调用了 cluster.setupMaster()(以先发生者为准),它便实际上被冻结,不可更改。

SCHED_RR is the default on all operating systems except Windows. Windows will change to SCHED_RR once libuv is able to effectively distribute IOCP handles without incurring a large performance hit.

SCHED_RR 是除 Windows 外所有操作系统上的缺省方式。只要 libuv 能够有效地分配 IOCP 句柄并且不产生巨大的性能损失,Windows 也将会更改为 SCHED_RR 方式。

cluster.schedulingPolicy can also be set through the NODE_CLUSTER_SCHED_POLICY environment variable. Valid values are "rr" and "none".

cluster.schedulingPolicy 也可以通过环境变量 NODE_CLUSTER_SCHED_POLICY 设定。有效值为 "rr""none"

cluster.settings#

  • Object

    • exec String file path to worker file. (Default=__filename)
    • args Array string arguments passed to worker. (Default=process.argv.slice(2))
    • silent Boolean whether or not to send output to parent's stdio. (Default=false)
  • Object

    • exec String 工作进程文件的路径。(缺省为 __filename
    • args Array 传递给工作进程的字符串参数。(缺省为 process.argv.slice(2)
    • silent Boolean 是否将输出发送到父进程的 stdio。(缺省为 false

All settings set by the .setupMaster is stored in this settings object. This object is not supposed to be changed or set manually, by you.

所有由 .setupMaster 设定的设置都会储存在此设置对象中。这个对象不应由您手动更改或设定。

集群的主进程(判断当前进程是否是主进程)#

  • Boolean

  • Boolean

True if the process is a master. This is determined by the process.env.NODE_UNIQUE_ID. If process.env.NODE_UNIQUE_ID is undefined, then isMaster is true.

如果进程为主进程则为 true。这是由 process.env.NODE_UNIQUE_ID 判断的,如果 process.env.NODE_UNIQUE_ID 为 undefined,则 isMastertrue

集群的主线程(判断当前线程是否是主线程)#

  • Boolean

  • Boolean

This boolean flag is true if the process is a worker forked from a master. If the process.env.NODE_UNIQUE_ID is set to a value, then isWorker is true.

如果当前进程是分支自主进程的工作进程,则该布尔标识的值为 true。如果 process.env.NODE_UNIQUE_ID 被设定为一个值,则 isWorkertrue

事件: 'fork'#

  • worker Worker object

  • worker Worker object

When a new worker is forked the cluster module will emit a 'fork' event. This can be used to log worker activity, and create you own timeout.

当一个新的工作进程被分支出来,cluster 模块会产生一个 'fork' 事件。这可被用于记录工作进程活动,以及创建您自己的超时判断。

cluster.on('fork', function(worker) {
  timeouts[worker.id] = setTimeout(errorMsg, 2000);
});
cluster.on('listening', function(worker, address) {
  clearTimeout(timeouts[worker.id]);
});
cluster.on('exit', function(worker, code, signal) {
  clearTimeout(timeouts[worker.id]);
  errorMsg();
});

事件: 'online'#

  • worker Worker object

  • worker Worker object

After forking a new worker, the worker should respond with a online message. When the master receives a online message it will emit such event. The difference between 'fork' and 'online' is that fork is emitted when the master tries to fork a worker, and 'online' is emitted when the worker is being executed.

分支出一个新的工作进程后,工作进程会响应一个在线消息。当主进程收到一个在线消息后,它会触发该事件。'fork' 和 'online' 的区别在于前者发生于主进程尝试分支出工作进程时,而后者发生于工作进程被执行时。

cluster.on('online', function(worker) {
  console.log("嘿嘿,工作进程完成分支并发出回应了");
});

事件: 'listening'#

  • worker Worker object
  • address Object

  • worker Worker object

  • address Object

When calling listen() from a worker, a 'listening' event is automatically assigned to the server instance. When the server is listening a message is send to the master where the 'listening' event is emitted.

当工作进程调用 listen() 时,一个 'listening' 事件会被自动分配到服务器实例上。当服务器开始监听时,一个消息会被发送到主进程,由主进程触发 'listening' 事件。

The event handler is executed with two arguments, the worker contains the worker object and the address object contains the following connection properties: address, port and addressType. This is very useful if the worker is listening on more than one address.

事件处理器被执行时会带上两个参数。其中 worker 包含了工作进程对象,address 对象包含了下列连接属性:地址 address、端口号 port 和地址类型 addressType。如果工作进程监听多个地址,那么这些信息将十分有用。

cluster.on('listening', function(worker, address) {
  console.log("一个工作进程刚刚连接到 " + address.address + ":" + address.port);
});

事件: 'disconnect'#

  • worker Worker object

  • worker Worker object

When a workers IPC channel has disconnected this event is emitted. This will happen when the worker dies, usually after calling .kill().

当一个工作进程的 IPC 通道断开时此事件会发生。这发生于工作进程结束时,通常是调用 .kill() 之后。

When calling .disconnect(), there may be a delay between the disconnect and exit events. This event can be used to detect if the process is stuck in a cleanup or if there are long-living connections.

当调用 .disconnect() 后,disconnectexit 事件之间可能存在延迟。该事件可被用于检测进程是否被卡在清理过程或存在长连接。

cluster.on('disconnect', function(worker) {
  console.log('工作进程 #' + worker.id + ' 断开了连接');
});

事件: 'exit'#

  • worker Worker object
  • code Number the exit code, if it exited normally.
  • signal String the name of the signal (eg. 'SIGHUP') that caused the process to be killed.

  • worker Worker object

  • code Number 如果是正常退出则为退出代码。
  • signal String 使得进程被终止的信号的名称(比如 'SIGHUP')。

When any of the workers die the cluster module will emit the 'exit' event. This can be used to restart the worker by calling fork() again.

当任意工作进程终止时,集群模块会触发 'exit' 事件。通过再次调用 fork() 函数,可以使用这个事件来重启工作进程。

cluster.on('exit', function(worker, code, signal) {
  var exitCode = worker.process.exitCode;
  console.log('工作进程 ' + worker.process.pid + ' 被结束('+exitCode+')。正在重启...');
  cluster.fork();
});

事件: 'setup'#

  • worker Worker object

  • worker Worker object

When the .setupMaster() function has been executed this event emits. If .setupMaster() was not executed before fork() this function will call .setupMaster() with no arguments.

.setupMaster() 函数被执行时触发此事件。如果 .setupMaster()fork() 之前没被执行,那么它会不带参数调用 .setupMaster()

cluster.setupMaster([settings])#

  • settings Object

    • exec String file path to worker file. (Default=__filename)
    • args Array string arguments passed to worker. (Default=process.argv.slice(2))
    • silent Boolean whether or not to send output to parent's stdio. (Default=false)
  • settings Object

    • exec String 工作进程文件的路径。(缺省为 __filename
    • args Array 传给工作进程的字符串参数。(缺省为 process.argv.slice(2)
    • silent Boolean 是否将输出发送到父进程的 stdio。(缺省为 false

setupMaster is used to change the default 'fork' behavior. The new settings are effective immediately and permanently, they cannot be changed later on.

setupMaster 被用于更改缺省的 fork 行为。新的设置会立即永久生效,并且在之后不能被更改。

Example:

示例:

var cluster = require("cluster");
cluster.setupMaster({
  exec : "worker.js",
  args : ["--use", "https"],
  silent : true
});
cluster.fork();

cluster.fork([env])#

  • env Object Key/value pairs to add to child process environment.
  • return Worker object

  • env Object 添加到子进程环境变量中的键值对。

  • 返回 Worker object

Spawn a new worker process. This can only be called from the master process.

派生一个新的工作进程。这个函数只能在主进程中被调用。

cluster.disconnect([callback])#

  • callback Function called when all workers are disconnected and handlers are closed

  • callback Function 当所有工作进程都断开连接并且句柄被关闭时被调用

When calling this method, all workers will commit a graceful suicide. When they are disconnected all internal handlers will be closed, allowing the master process to die graceful if no other event is waiting.

调用此方法时,所有的工作进程都会优雅地将自己结束掉。当它们都断开连接后,所有的内部句柄都会被关闭,使得主进程可以在没有其它事件等待时优雅地结束。

The method takes an optional callback argument which will be called when finished.

该方法带有一个可选的回调参数,会在完成时被调用。

cluster.worker#

  • Object

  • Object

A reference to the current worker object. Not available in the master process.

对当前工作进程对象的引用。在主进程中不可用。

if (cluster.isMaster) {
  console.log('我是主进程');
  cluster.fork();
  cluster.fork();
} else if (cluster.isWorker) {
  console.log('我是工作进程 #' + cluster.worker.id);
}

cluster.workers#

  • Object

  • Object

A hash that stores the active worker objects, keyed by id field. Makes it easy to loop through all the workers. It is only available in the master process.

一个储存活动工作进程对象的哈希表,以 id 字段作为主键,可以方便地遍历所有工作进程。仅在主进程中可用。

// 遍历所有工作进程
function eachWorker(callback) {
  for (var id in cluster.workers) {
    callback(cluster.workers[id]);
  }
}
eachWorker(function(worker) {
  worker.send('向一线工作者们致以亲切问候!');
});

Should you wish to reference a worker over a communication channel, using the worker's unique id is the easiest way to find the worker.

如果您希望通过通讯通道引用一个工作进程,那么使用工作进程的唯一标识是找到那个工作进程的最简单的办法。

socket.on('data', function(id) {
  var worker = cluster.workers[id];
});

类: Worker#

A Worker object contains all public information and method about a worker. In the master it can be obtained using cluster.workers. In a worker it can be obtained using cluster.worker.

一个 Worker 对象包含了工作进程的所有公开信息和方法。可通过主进程中的 cluster.workers 或工作进程中的 cluster.worker 取得。

worker.id#

  • String

  • String

Each new worker is given its own unique id, this id is stored in the id.

每个新的工作进程都被赋予一个唯一的标识,这个标识被储存在 id 中。

While a worker is alive, this is the key that indexes it in cluster.workers

当一个工作进程存活时,这就是它在 cluster.workers 中被索引的主键。

worker.process#

  • ChildProcess object

All workers are created using child_process.fork(), the returned object from this function is stored in process.

所有工作进程都是使用 child_process.fork() 创建的,该函数返回的对象被储存在 process 中。

See: Child Process module

参考:Child Process 模块

worker.suicide#

  • Boolean

This property is a boolean. It is set when a worker dies after calling .kill() or immediately after calling the .disconnect() method. Until then it is undefined.

该属性是一个布尔值。它会在工作进程因调用 .kill() 而终止时,或在调用 .disconnect() 方法后立即被设置。在此之前它的值是 undefined

worker.send(message, [sendHandle])#

  • message Object
  • sendHandle Handle object

This function is equivalent to the send method provided by child_process.fork(). In the master you should use this function to send a message to a specific worker. However, in a worker you can also use process.send(message), since this is the same function.

该函数等同于 child_process.fork() 提供的 send 方法。在主进程中您可以用该函数向特定工作进程发送消息。当然,在工作进程中您也能使用 process.send(message),因为它们是同一个函数。

This example will echo back all messages from the master:

这个例子会回应来自主进程的所有消息:

if (cluster.isMaster) {
  var worker = cluster.fork();
  worker.send('hi there');

} else if (cluster.isWorker) {
  process.on('message', function(msg) {
    process.send(msg);
  });
}

worker.kill([signal='SIGTERM'])#

  • signal String Name of the kill signal to send to the worker process.

  • signal String 发送给工作进程的终止信号的名称

This function will kill the worker, and inform the master to not spawn a new worker. The boolean suicide lets you distinguish between voluntary and accidental exit.

该函数会终止工作进程,并告知主进程不要派生一个新工作进程。布尔值 suicide 让您区分自行退出和意外退出。

// 终止工作进程
worker.kill();

This method is aliased as worker.destroy() for backwards compatibility.

该方法的别名是 worker.destroy(),以保持向后兼容。

worker.disconnect()#

When calling this function the worker will no longer accept new connections; they will instead be handled by any other listening worker. Existing connections will be allowed to exit as usual. When no more connections exist, the IPC channel to the worker will close, allowing it to die gracefully. When the IPC channel is closed the disconnect event will be emitted, followed by the exit event, which is emitted when the worker finally dies.

调用该函数后工作进程将不再接受新连接,新连接会转而被其它正在监听的工作进程处理。已存在的连接允许正常退出。当没有连接存在时,连接到工作进程的 IPC 通道会被关闭,使其能够优雅地结束。IPC 通道关闭时会触发 disconnect 事件,随后是工作进程最终结束时触发的 exit 事件。

Because there might be long living connections, it is useful to implement a timeout. This example asks the worker to disconnect, and after 2 seconds it will destroy the server. An alternative would be to execute worker.kill() after 2 seconds, but that would normally not allow the worker to do any cleanup if needed.

由于可能存在长连接,通常会实现一个超时机制。这个例子会告知工作进程断开连接,并且在 2 秒后销毁服务器。另一个备选方案是 2 秒后执行 worker.kill(),但那样通常会使得工作进程没有机会进行必要的清理。

if (cluster.isMaster) {
  var worker = cluster.fork();
  var timeout;

  worker.on('listening', function(address) {
    worker.disconnect();
    timeout = setTimeout(function() {
      worker.send('force kill');
    }, 2000);
  });

  worker.on('disconnect', function() {
    clearTimeout(timeout);
  });

} else if (cluster.isWorker) {
  var net = require('net');
  var server = net.createServer(function(socket) {
    // 连接永不结束
  });

  server.listen(8000);

  process.on('message', function(msg) {
    if (msg === 'force kill') {
      server.close();
    }
  });
}

事件: 'message'#

  • message Object

This event is the same as the one provided by child_process.fork(). In the master you should use this event; however, in a worker you can also use process.on('message').

该事件和 child_process.fork() 所提供的一样。在主进程中您应当使用该事件,而在工作进程中您也可以使用 process.on('message')

As an example, here is a cluster that keeps count of the number of requests in the master process using the message system:

举个例子,这里有一个集群,使用消息系统在主进程中统计请求的数量:

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {

  // 跟踪 http 请求的数量
  var numReqs = 0;
  setInterval(function() {
    console.log("numReqs =", numReqs);
  }, 1000);

  // 对请求计数
  function messageHandler(msg) {
    if (msg.cmd && msg.cmd == 'notifyRequest') {
      numReqs += 1;
    }
  }

  // 启动工作进程,并监听包含 notifyRequest 的消息
  var numCPUs = require('os').cpus().length;
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  Object.keys(cluster.workers).forEach(function(id) {
    cluster.workers[id].on('message', messageHandler);
  });

} else {

  // 工作进程运行一个 http 服务器
  http.Server(function(req, res) {
    res.writeHead(200);
    res.end("hello world\n");

    // 将请求通知主进程
    process.send({ cmd: 'notifyRequest' });
  }).listen(8000);
}

事件: 'online'#

Same as the cluster.on('online') event, but emits only when the state changes on the specified worker.

cluster.on('online') 事件一样,但仅当特定工作进程的状态改变时发生。

cluster.fork().on('online', function() {
  // 工作进程在线
});

事件: 'listening'#

  • address Object

Same as the cluster.on('listening') event, but emits only when the state changes on the specified worker.

cluster.on('listening') 事件一样,但仅当特定工作进程的状态改变时发生。

cluster.fork().on('listening', function(address) {
  // 工作进程正在监听
});

事件: 'disconnect'#

Same as the cluster.on('disconnect') event, but emits only when the state changes on the specified worker.

cluster.on('disconnect') 事件一样,但仅当特定工作进程的状态改变时发生。

cluster.fork().on('disconnect', function() {
  // 工作进程断开了连接
});

事件: 'exit'#

  • code Number the exit code, if it exited normally.
  • signal String the name of the signal (e.g. 'SIGHUP') that caused the process to be killed.

  • code Number 如果是正常退出则为退出代码。

  • signal String 使得进程被终止的信号的名称(比如 'SIGHUP')。

Emitted by the individual worker instance, when the underlying child process is terminated. See child_process event: 'exit'.

由单个工作进程实例在底层子进程被结束时触发。详见子进程事件: 'exit'

var worker = cluster.fork();
worker.on('exit', function(code, signal) {
  if( signal ) {
    console.log("worker was killed by signal: "+signal);
  } else if( code !== 0 ) {
    console.log("worker exited with error code: "+code);
  } else {
    console.log("worker success!");
  }
});


var worker = cluster.fork();
worker.on('exit', function(code, signal) {
  if( signal ) {
    console.log("工人被信号 " + signal + " 杀掉了");
  } else if( code !== 0 ) {
    console.log("工作进程退出,错误码:" + code);
  } else {
    console.log("劳动者的胜利!");
  }
});

Smalloc#

稳定度: 1 - 实验性

smalloc.alloc(length[, receiver][, type])#

  • length Number <= smalloc.kMaxLength
  • receiver Object, Optional, Default: new Object
  • type Enum, Optional, Default: Uint8

  • length Number <= smalloc.kMaxLength

  • receiver Object 可选,缺省为 new Object
  • type Enum 可选,缺省为 Uint8

Returns receiver with allocated external array data. If no receiver is passed then a new Object will be created and returned.

返回分配了外部数组数据的 receiver。如果未传入 receiver,则会创建并返回一个新的 Object。

Buffers are backed by a simple allocator that only handles the assignation of external raw memory. Smalloc exposes that functionality.

Buffer 由一个简易分配器支撑,该分配器只负责外部原始内存的分配。Smalloc 暴露了这一功能。

This can be used to create your own Buffer-like classes. No other properties are set, so the user will need to keep track of other necessary information (e.g. length of the allocation).

这可用于创建你自己的类似 Buffer 的类。由于不会设置其它属性,因此使用者需要自行跟踪其它所需信息(比如所分配的长度 length)。

function SimpleData(n) {
  this.length = n;
  smalloc.alloc(this.length, this);
}

SimpleData.prototype = { /* ... */ };

It only checks if the receiver is an Object, and also not an Array. Because of this it is possible to allocate external array data to more than a plain Object.

它只检查 receiver 是否为一个非 Array 的 Object。因此,可以分配外部数组数据的不止纯 Object。

function allocMe() { }
smalloc.alloc(3, allocMe);

// { [Function allocMe] '0': 0, '1': 0, '2': 0 }

v8 does not support allocating external array data to an Array, and if passed will throw.

V8 不支持向一个 Array 分配外部数组数据,如果这么做将会抛出异常。

It's possible to specify the type of external array data you would like. All possible options are listed in smalloc.Types. Example usage:

您可以指定您想要的外部数组数据的类型。所有可取的值都已在 smalloc.Types 中列出。使用示例:

var doubleArr = smalloc.alloc(3, smalloc.Types.Double);

for (var i = 0; i < 3; i++)
  doubleArr[i] = i / 10;

// { '0': 0, '1': 0.1, '2': 0.2 }

smalloc.copyOnto(source, sourceStart, dest, destStart, copyLength)#

  • source Object with external array allocation
  • sourceStart Position to begin copying from
  • dest Object with external array allocation
  • destStart Position to begin copying onto
  • copyLength Length of copy

  • source 分配了外部数组的来源对象

  • sourceStart 从这个位置开始拷贝
  • dest 分配了外部数组的目标对象
  • destStart 拷贝到这个位置
  • copyLength 拷贝的长度

Copy memory from one external array allocation to another. No arguments are optional, and any violation will throw.

从一个外部数组向另一个拷贝内存。所有参数都是必填,否则将会抛出异常。

var a = smalloc.alloc(4);
var b = smalloc.alloc(4);

for (var i = 0; i < 4; i++) {
  a[i] = i;
  b[i] = i * 2;
}

// { '0': 0, '1': 2, '2': 4, '3': 6 }

smalloc.copyOnto(b, 2, a, 0, 2);

// { '0': 4, '1': 6, '2': 2, '3': 3 }

copyOnto automatically detects the length of the allocation internally, so no need to set any additional properties for this to work.

copyOnto 会在内部自动检测所分配的长度,因此无需为此设置任何额外的属性。
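
The copy semantics can be sketched with plain typed arrays (an illustration added here, not the smalloc API itself):

copyOnto 的拷贝语义可以用普通的类型化数组来示意(这是补充的示例,并非 smalloc API 本身):

```javascript
// 与 smalloc.copyOnto(source, sourceStart, dest, destStart, copyLength)
// 语义相同的逐元素拷贝(仅作示意)
function copyOnto(source, sourceStart, dest, destStart, copyLength) {
  for (var i = 0; i < copyLength; i++) {
    dest[destStart + i] = source[sourceStart + i];
  }
}

var a = new Uint8Array([0, 1, 2, 3]);
var b = new Uint8Array([0, 2, 4, 6]);

copyOnto(b, 2, a, 0, 2);

// a 现在为 [4, 6, 2, 3]
```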

smalloc.dispose(obj)#

  • obj Object

  • obj 对象

Free memory that has been allocated to an object via smalloc.alloc.

释放已使用 smalloc.alloc 分配到一个对象的内存。

var a = {};
smalloc.alloc(3, a);

// { '0': 0, '1': 0, '2': 0 }

smalloc.dispose(a);

// {}

This is useful to reduce strain on the garbage collector, but developers must be careful. Cryptic errors that are difficult to trace may arise in applications.

这有助于减轻垃圾回收器的负担,但开发者务必小心。应用程序可能会出现难以追踪的离奇错误。

var a = smalloc.alloc(4);
var b = smalloc.alloc(4);

smalloc.dispose(b);

smalloc.copyOnto(b, 2, a, 0, 2);

// 将导致:
// Error: source has no external array data

dispose() does not support Buffers, and will throw if passed.

dispose() 不支持 Buffer,传入将会抛出异常。

smalloc.kMaxLength#

Size of maximum allocation. This is also applicable to Buffer creation.

最大的分配大小。该值同时也适用于 Buffer 的创建。

smalloc.Types#

Enum of possible external array types. Contains:

外部数组类型的可取值,包含:

  • Int8
  • Uint8
  • Int16
  • Uint16
  • Int32
  • Uint32
  • Float
  • Double
  • Uint8Clamped