Aptik (Automated Package Backup and Restore) is an application for Ubuntu, Linux Mint, and other Debian- and Ubuntu-based Linux distributions. It lets you back up your installed PPAs (Personal Package Archives) and other software sources, downloaded packages, installed applications and themes, and user settings to an external USB drive, network storage, or a cloud service such as Dropbox.
As web-based applications and services multiply, the responsibility resting on IT system administrators keeps growing. Whether you face unexpected events such as traffic spikes, or internal challenges such as hardware failures and emergency maintenance, your web applications must stay available no matter what. Even today's popular devops and continuous delivery (CD) practices can put the reliability and performance consistency of your web services at risk.
There are generally two ways to tag an image: with the docker tag command, or by passing the -t argument to docker build. In both cases the argument usually takes the form repository_name:tag_name, e.g. docker tag myrepo:mytag. If the repository is pushed to Docker Hub, the repository name gets a prefix consisting of the Docker Hub username and a slash, e.g. amouat/myrepo:mytag. If the tag part is left off, as in docker tag myrepo:1.0 myrepo, Docker automatically gives the image the latest tag. You probably know all of this already; that really is all there is to it, nothing magical.
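To make that concrete, here is a short sketch using the names already mentioned (myrepo and the amouat user are just the running example):

$ docker build -t myrepo:1.0 .             # tag at build time with -t
$ docker tag myrepo:1.0 amouat/myrepo:1.0  # username/repo form for pushing to Docker Hub
$ docker tag myrepo:1.0 myrepo             # tag part omitted, so Docker assigns myrepo:latest

Now for a quick quiz: which release of Debian is the following image?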
$ docker images debian
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
debian latest 4d6ce913b130 4 days ago 84.98 MB
Er, no idea. In fact, it is 7.8, the wheezy release:
$ docker pull debian:7.8
debian:7.8: The image you are pulling has been verified
511136ea3c5a: Already exists
d0a18d3b84de: Already exists
4d6ce913b130: Already exists
Status: Image is up to date for debian:7.8
$ docker pull debian:wheezy
debian:wheezy: The image you are pulling has been verified
511136ea3c5a: Already exists
d0a18d3b84de: Already exists
4d6ce913b130: Already exists
Status: Image is up to date for debian:wheezy
$ docker images debian
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
debian 7.8 4d6ce913b130 4 days ago 84.98 MB
debian latest 4d6ce913b130 4 days ago 84.98 MB
debian wheezy 4d6ce913b130 4 days ago 84.98 MB

All three tags point at the same image ID: debian:latest here is simply another name for debian:7.8/wheezy, so latest tells you nothing about how recent the image actually is.
class StorageLevel private(
    private var _useDisk: Boolean,
    private var _useMemory: Boolean,
    private var _useOffHeap: Boolean,
    private var _deserialized: Boolean,
    private var _replication: Int = 1)
  extends Externalizable { /* ... */ }
val NONE = new StorageLevel(false, false, false, false)
val DISK_ONLY = new StorageLevel(true, false, false, false)
val DISK_ONLY_2 = new StorageLevel(true, false, false, false, 2)
val MEMORY_ONLY = new StorageLevel(false, true, false, true)
val MEMORY_ONLY_2 = new StorageLevel(false, true, false, true, 2)
val MEMORY_ONLY_SER = new StorageLevel(false, true, false, false)
val MEMORY_ONLY_SER_2 = new StorageLevel(false, true, false, false, 2)
val MEMORY_AND_DISK = new StorageLevel(true, true, false, true)
val MEMORY_AND_DISK_2 = new StorageLevel(true, true, false, true, 2)
val MEMORY_AND_DISK_SER = new StorageLevel(true, true, false, false)
val MEMORY_AND_DISK_SER_2 = new StorageLevel(true, true, false, false, 2)
val OFF_HEAP = new StorageLevel(false, false, true, false)
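To show how these constants are actually consumed, here is a minimal sketch of persisting an RDD at one of the predefined levels (sc is a live SparkContext, and data.txt is a made-up input file):

import org.apache.spark.storage.StorageLevel

val lines = sc.textFile("data.txt")
lines.persist(StorageLevel.MEMORY_AND_DISK_SER) // keep serialized in memory, spill to disk
lines.count() // the first action materializes and caches the RDD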
/**
 * Get or create the list of parent stages for a given RDD. The stages will be assigned the
 * provided jobId if they haven't already been created with a lower jobId.
 */
private def getParentStages(rdd: RDD[_], jobId: Int): List[Stage] = {
  val parents = new HashSet[Stage]
  val visited = new HashSet[RDD[_]]
  // We are manually maintaining a stack here to prevent StackOverflowError
  // caused by recursively visiting
  val waitingForVisit = new Stack[RDD[_]]
  def visit(r: RDD[_]) {
    if (!visited(r)) {
      visited += r
      // Kind of ugly: need to register RDDs with the cache here since
      // we can't do it in its constructor because # of partitions is unknown
      for (dep <- r.dependencies) {
        dep match {
          case shufDep: ShuffleDependency[_, _, _] =>
            parents += getShuffleMapStage(shufDep, jobId)
          case _ =>
            waitingForVisit.push(dep.rdd)
        }
      }
    }
  }
  waitingForVisit.push(rdd)
  while (!waitingForVisit.isEmpty) {
    visit(waitingForVisit.pop())
  }
  parents.toList
}
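Note that only a ShuffleDependency produces a parent stage; narrow dependencies are just pushed back onto the stack and absorbed into the current stage. A minimal sketch of a job with exactly one such boundary (names and data invented for illustration):

val counts = sc.parallelize(Seq("a", "b", "a"))
  .map(word => (word, 1)) // narrow dependency: stays in the same stage
  .reduceByKey(_ + _)     // ShuffleDependency: a parent stage is cut here
counts.collect()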
abstract class RDD[T: ClassTag](
    @transient private var sc: SparkContext,
    @transient private var deps: Seq[Dependency[_]]
  ) extends Serializable with Logging {
  // ...

  /**
   * Return an array that contains all of the elements in this RDD.
   */
  def collect(): Array[T] = {
    val results = sc.runJob(this, (iter: Iterator[T]) => iter.toArray)
    Array.concat(results: _*)
  }
}
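Because collect() funnels every partition back to the driver as a single array, it is only safe for small results; a tiny usage sketch:

val doubled = sc.parallelize(1 to 5).map(_ * 2)
val arr: Array[Int] = doubled.collect() // Array(2, 4, 6, 8, 10), assembled on the driver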
/**
 * Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint
 * directory set with SparkContext.setCheckpointDir() and all references to its parent
 * RDDs will be removed. This function must be called before any job has been
 * executed on this RDD. It is strongly recommended that this RDD is persisted in
 * memory, otherwise saving it on a file will require recomputation.
 */
def checkpoint() {
  if (context.checkpointDir.isEmpty) {
    throw new SparkException("Checkpoint directory has not been set in the SparkContext")
  } else if (checkpointData.isEmpty) {
    checkpointData = Some(new RDDCheckpointData(this))
    checkpointData.get.markForCheckpoint()
  }
}
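Following the doc comment's advice, a minimal usage sketch (the checkpoint path is made up):

sc.setCheckpointDir("/tmp/spark-checkpoints") // must happen first, or checkpoint() throws
val data = sc.parallelize(1 to 100).map(_ + 1)
data.persist()    // recommended, so the checkpoint write does not recompute the lineage
data.checkpoint() // mark before any job has run on this RDD
data.count()      // the action triggers both the cache and the checkpoint file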
error: Failed dependencies:
libICE.so.6 is needed by kingsoft-office-9.1.0.4244-0.1.a12p3.i686
libSM.so.6 is needed by kingsoft-office-9.1.0.4244-0.1.a12p3.i686
libX11.so.6 is needed by kingsoft-office-9.1.0.4244-0.1.a12p3.i686
libXext.so.6 is needed by kingsoft-office-9.1.0.4244-0.1.a12p3.i686
libXrender.so.1 is needed by kingsoft-office-9.1.0.4244-0.1.a12p3.i686
libc.so.6 is needed by kingsoft-office-9.1.0.4244-0.1.a12p3.i686
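These are the 32-bit (i686) X11 and C runtime libraries that the Kingsoft Office package needs. On a yum-based 64-bit system, one common way to satisfy them is to install the i686 builds of the corresponding packages (the package names assume Fedora/CentOS-style naming):

# yum install libICE.i686 libSM.i686 libX11.i686 libXext.i686 libXrender.i686 glibc.i686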
To address the problems described above, Hadoop 2.4.0 added support for HDFS ACLs (Access Control Lists), and this new feature solved them nicely. However, as Hadoop has been widely adopted in the enterprise, more and more business scenarios demand access control at a granularity finer than the file level: constraining which data inside a file can be read and written, which can only be read, and which must not be accessed at all. For SQL-based big data engines, access control needs to go beyond table granularity down to the row and column level.
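For reference, the file-level ACLs introduced in Hadoop 2.4.0 are managed through the setfacl and getfacl subcommands; a small sketch (the user and path are invented):

$ hdfs dfs -setfacl -m user:alice:r-- /data/sales.csv  # grant alice read-only access
$ hdfs dfs -getfacl /data/sales.csv                    # inspect the resulting ACL

Row- and column-level restrictions like those described above are precisely what this mechanism cannot express, since it stops at whole files and directories.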
MongoDB shell version: 2.6.5
connecting to: test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
>
> rs.initiate()
{
    "info2" : "no configuration explicitly specified -- making one",
    "me" : "node1.example.com:27017",
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
>
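From this point the set would normally be grown and verified with the standard shell helpers; a brief sketch (node2.example.com is a made-up second host following the node1 naming):

> rs.add("node2.example.com:27017") // add a second member
> rs.status()                       // shows each member's state (PRIMARY, SECONDARY, ...)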
/admin/ { ... } # any line that contains 'admin'
/^admin/ { ... } # lines that begin with 'admin'
/admin$/ { ... } # lines that end with 'admin'
/^[0-9.]+ / { ... } # lines beginning with series of numbers and periods
/(POST|PUT|DELETE)/ # lines that contain specific HTTP verbs
{ print $0; } # prints $0. In this case, equivalent to 'print' alone
{ exit; } # ends the program
{ next; } # skips to the next line of input
{ a=$1; b=$0 } # variable assignment
{ c[$1] = $2 } # variable assignment (array)
{ if (BOOLEAN) { ACTION }
else if (BOOLEAN) { ACTION }
else { ACTION }
}
{ for (i=1; i<=NF; i++) { ACTION } } # C-style loop, here over the fields of a record
# function arguments are call-by-value
function name(parameter-list) {
ACTIONS; # same actions as usual
}
# return is a valid keyword
function add1(val) {
return val+1;
}
BEGIN { # Can be modified by the user
FS = ","; # Field Separator
RS = "\n"; # Record Separator (lines)
OFS = " "; # Output Field Separator
ORS = "\n"; # Output Record Separator (lines)
}
{ # Can't be modified by the user
NF # Number of Fields in the current Record (line)
NR # Number of Records seen so far
ARGV / ARGC # Script Arguments
}
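To tie the pieces together, a small self-contained example in the same spirit; it assumes a log file named access.log whose first field is the client IP:

awk '/(POST|PUT|DELETE)/ { hits[$1]++ } END { for (ip in hits) print ip, hits[ip] }' access.log

A regex pattern selects the modifying requests, an array keyed by $1 counts them, and an END block (the counterpart of BEGIN above) prints one total per client.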