e*n
Post #2
[The following is cross-posted from the Belief board]
Sender: deliver (automated posting system), Board: Belief
Subject: [Notice] Belief board poll: religious belief and morality
Posted from: BBS 未名空间站 automated posting system (Fri Oct 23 11:33:57 2009)
[This article was posted by the automated posting system]
⊙ Poll opened: Fri Oct 23 11:33:57 2009  Type: multiple choice
⊙ Topic: religious belief and morality
⊙ Poll description:
Note: "religious belief" here means any religion
Thanks for participating
Baozi (forum points) offered
[Options]
(1) People with religious beliefs have above-average moral standards
(2) People with religious beliefs have below-average moral standards
(3) Followers of a certain religion have above-average moral standards
(4) Followers of a certain religion have below-average moral standards
(5) Atheists have above-average moral standards
(6) Atheists have below-average moral standards
(7) Religious belief and moral standards are unrelated
c*0
Post #4
I said a long time ago that Docker and its ecosystem would come to dominate the devops space; hardly anyone believed it back then.
[Quoted from c****e's post:]
: http://www.pcworld.com/article/2159100/google-cloud-previews-do
c*0
Post #5
Another one very likely to take off is CoreOS. Take a look if you're interested.
[Quoted from c****e's post:]
: http://www.pcworld.com/article/2159100/google-cloud-previews-do
n*1
Post #7
I believe it.
There's also something called ZeroVM, based on Chrome's Native Client technology. It can do memory protection at the software level without relying on the MMU; its potential is unlimited.
Y*G
Post #8
This thing is built on Linux containers, so in many cases it's clearly better than the VM approach: efficient and low overhead. VMware may be in trouble down the road.
[Quoted from c****e's post:]
: http://www.pcworld.com/article/2159100/google-cloud-previews-do
Y*G
Post #9
http://www.linuxjournal.com/content/containers%E2%80%94not-virt
Cloud infrastructure providers like Amazon Web Services sell virtual machines. EC2 revenue is expected to surpass $1B in revenue this year. That's a lot of VMs.
It's not hard to see why there is such demand. You get the ability to scale up or down, guaranteed computational resources, security isolation and API access for provisioning it all, without any of the overhead of managing physical servers.
But, you are also paying for a lot of increasingly avoidable overhead in the form of running a full-blown operating system image for each virtual machine. This approach has become an unnecessarily heavyweight solution to the underlying question of how to best run applications in the cloud.
Figure 1. Traditional virtualization and paravirtualization require a full operating system image for each instance.
Until recently it has been assumed that OS virtualization is the only path to provide appropriate isolation for applications running on a server. These assumptions are quickly becoming dated, thanks to recent underlying improvements to how the Linux kernel can now manage isolation between applications.
Containers now can be used as an alternative to OS-level virtualization to run multiple isolated systems on a single host. Containers within a single operating system are much more efficient, and because of this efficiency, they will underpin the future of the cloud infrastructure industry in place of VM architecture.
Figure 2. Containers can share a single operating system and, optionally, other binary and library resources.
How We Got Here
There is a good reason why we buy by the virtual machine today: containers used to be terrible, if they existed in any useful form at all. Let's hop back to 2005 for a moment. "chroot" certainly didn't (and still doesn't) meet the resource and security isolation goals for multi-tenant designs. "nice" is a winner-takes-all scheduling mechanism. The "fair" resource scheduling in the kernel is often too fair, equally balancing resources between a hungry, unimportant process and a hungry, important one. Memory and file descriptor limits offer no gradient between normal operation and crashing an application that's overstepped its boundaries.
Virtual machines were able to partition and distribute resources viably in the hypervisor without relying on kernel support or, worse, separate hardware. For a long time, virtual machines were the only way on Linux to give Application A up to 80% of CPU resources and Application B up to 20%. Similar partitioning and sharing schemes exist for memory, disk block I/O, network I/O and other contentious resources.
Virtual machines have made major leaps in efficiency too. What used to be borderline-emulation has moved to direct hardware support for memory page mapping and other hard-to-virtualize features. We're down to a CPU penalty of only a few percent versus direct hardware use.
...
[Quoted from Y**G's post:]
: This thing is built on Linux containers, so in many cases it's clearly better than the VM approach: efficient and low
: overhead. VMware may be in trouble down the road.
g*g
Post #11
Interesting idea, but double efficiency doesn't sound that big a deal; it's a one-time gain. You probably lose a lot of flexibility working on top of a container rather than an OS.
[Quoted from c****e's post:]
: http://www.pcworld.com/article/2159100/google-cloud-previews-do
c*0
Post #13
http://www.brightball.com/development/docker-is-the-heroku-kill
[Quoted from d*******r's post:]
: Care to explain, experts: compared with the previous generation of devops tools (chef, puppet, salt, ansible),
: what are the main advantages?
g*g
Post #17
I think it's like the debate of PaaS vs. LaaS. For most serious applications, LaaS is still the way to go, at least for now. This container-based approach is more like a middle ground. We'll see.
[Quoted from c*******0's post:]
: No you don't. In contrast you gain a lot of predictability by working with
: containers
d*r
Post #18
So after a look: each app/service gets its own Linux environment configuration? E.g., a MongoDB has its own config, which makes it easy to migrate across hosts? Seems reasonable, and the cost is some redundant but mutually independent Linux config files?
[Quoted from c*******0's post:]
: http://www.brightball.com/development/docker-is-the-heroku-kill
c*0
Post #19
First of all, I think you mean IaaS?
Second, what makes Docker great is that it lets you build a PaaS on top of an IaaS with almost no effort. It's not IaaS vs. PaaS; it's a fusion of the two.
For example, EC2 is IaaS and App Engine or Heroku are PaaS, but you can very easily use Docker to build your own Heroku or App Engine on top of EC2. You're not even limited to EC2; a pile of your own physical machines works the same way.
Of course you don't have to build a PaaS at all. You can simply deploy Docker + CoreOS on EC2, and that's still much better than the existing approaches.
And Docker isn't limited to that. Another interesting thing is that Docker makes it easy to have a completely identical development and deployment environment, and deploying to however many machines takes minutes, because it's just running a process, so you never have the "inconsistent configuration" problem again.
I think you don't know Docker well enough. If you're interested in how Docker works together with IaaS, these projects are worth a look:
https://github.com/progrium/dokku
https://flynn.io/
http://deis.io/
Docker really is a revolutionary thing (in how it's used, not in its technology); you need to dig in to see what makes it great.
[Quoted from g*****g's post:]
: I think it's like the debate of PaaS vs. LaaS. For most serious applications,
: LaaS is still the way to go, at least for now. This container-based approach is
: more like a middle ground. We'll see.
d*r
Post #20
Is this the underlying implementation?
http://en.wikipedia.org/wiki/File:Linux_kernel_unified_hierarch
Docker doesn't actually need to work together with the previous generation of devops tools (chef, ansible), right?
[Quoted from d*******r's post:]
: So after a look: each app/service gets its own Linux environment configuration? E.g., a MongoDB has
: its own config, which makes it easy to migrate across hosts? Seems reasonable, and the cost is some redundant
: but mutually independent Linux config files?
c*0
Post #21
You can think of it that way at first, but in practice there is very little redundancy, because Docker is very modular and many of the upper containers reuse the containers below them.
A simple example: you have one nodejs + python app and one nodejs + go app. Your containers are roughly:
Linux container
Nodejs container
Python or Go container
The Linux and Nodejs layers are shared by both apps; on top of them you add the Python-specific or Go-specific container.
It's too much to explain here; go look at a Docker tutorial. You can start with these:
http://flux7.com/blogs/docker/docker-tutorial-series-part-1-an-
http://flux7.com/blogs/docker/docker-tutorial-series-part-2-the
http://flux7.com/blogs/docker/docker-tutorial-series-part-3-aut
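A minimal sketch of the layer reuse described above; the image names, base distro and packages are made up for illustration only:

mkdir -p node-base app-python app-go
# Shared base layer: Linux + Node.js
cat > node-base/Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nodejs npm
EOF
docker build -t mycorp/node-base node-base
# App 1 adds Python on top of the shared base
cat > app-python/Dockerfile <<'EOF'
FROM mycorp/node-base
RUN apt-get install -y python
EOF
docker build -t mycorp/app-python app-python
# App 2 adds Go on top of the same shared base
cat > app-go/Dockerfile <<'EOF'
FROM mycorp/node-base
RUN apt-get install -y golang
EOF
docker build -t mycorp/app-go app-go
# The ubuntu and nodejs layers are stored once and reused by both app images.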
[Quoted from d*******r's post:]
: So after a look: each app/service gets its own Linux environment configuration? E.g., a MongoDB has
: its own config, which makes it easy to migrate across hosts? Seems reasonable, and the cost is some redundant
: but mutually independent Linux config files?
n*1
Post #22
A PaaS usually restricts the languages and libraries you can use, so even running JNI or a Python C module may be forbidden. With IaaS, even the kernel runs separately for you.
What Docker can do is run any user-space program, whatever the language, libraries or config files. Unless the application involves kernel programming, or needs a special kernel feature that the shared kernel doesn't have, Docker handles it with ease.
[Quoted from g*****g's post:]
: I think it's like the debate of PaaS vs. LaaS. For most serious applications,
: LaaS is still the way to go, at least for now. This container-based approach is
: more like a middle ground. We'll see.
d*r
Post #23
Thanks, makes sense.
I have another important question. With the previous generation of devops tools, the biggest problem for me is the severe lack of debug info, especially for a devops newbie like me.
When a configuration is wrong, all you face is a black box and you don't know where to look for the error message. Do you check the Linux syslog, or some chef log? Once the config files get complex this becomes even scarier.
Thinking about why, I believe the main issue is that things like chef are foreign to Linux's native configuration style (mostly shell); the Linux configuration state doesn't cooperate with them explicitly. On top of that, native Linux configuration is itself hard to debug: different Linux distributions may put the config files and log output of the same service in different places!
Does Docker improve on this important front?
When a config is wrong, is it easy to test, and is there friendly debug info?
How does it solve the problem of a service/app being configured differently across Linux distributions?
[Quoted from c*******0's post:]
:
: You can think of it that way at first, but in practice there is very little redundancy, because Docker is very modular
: and many of the upper containers reuse the containers below them.
: A simple example: you have one nodejs + python app and one nodejs + go app,
: and your containers are roughly:
: Linux container
: Nodejs container
: Python or Go container
: The Linux and Nodejs layers are shared by both apps; on top of them you add the Python-specific
: or Go-specific container.
n*1
Post #29
They are not performance-friendly to JIT, but you can always use interpreted languages like python/ruby without a performance cut.
In addition, golang 1.3 will officially support NaCl. Go will not suffer a performance loss because it does not use a JIT.
[Quoted from f******2's post:]
: This one is just a sandbox like thing, a modified compiler to check code
: safety.
: only works for c programs, no big application case
c*e
Post #31
There's no longer a problem of a service/app being configured differently across Linux distributions: your distribution is based on docker. Just pick one Linux distribution to use as the base of your docker image and you're done.
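A tiny sketch of that idea; the base image and package are arbitrary examples, not a recommendation:

cat > Dockerfile <<'EOF'
FROM debian:wheezy
RUN apt-get update && apt-get install -y mongodb
CMD ["mongod"]
EOF
docker build -t mycorp/mongodb .
# The resulting image runs unchanged on any host that can run Docker,
# regardless of the host's own distribution.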
[Quoted from d*******r's post:]
: Thanks, makes sense.
: I have another important question. With the previous generation of devops tools, the biggest problem for me is the severe lack
: of debug info, especially for a devops newbie like me.
: When a configuration is wrong, all you face is a black box and you don't know where to look for the error message. Do you check the Linux syslog,
: or some chef log? Once the config files get complex this becomes even scarier.
: Thinking about why, I believe the main issue is that things like chef are foreign to Linux's native configuration style
: (mostly shell); the Linux configuration state doesn't cooperate with them explicitly. On top of that,
: native Linux configuration is itself hard to debug: different Linux distributions may put the config files
: and log output of the same service in different places!
: Does Docker improve on this important front?
g*g
Post #32
Interesting, I'll look into it later.
[Quoted from c*******0's post:]
:
: You can think of it that way at first, but in practice there is very little redundancy, because Docker is very modular
: and many of the upper containers reuse the containers below them.
: A simple example: you have one nodejs + python app and one nodejs + go app,
: and your containers are roughly:
: Linux container
: Nodejs container
: Python or Go container
: The Linux and Nodejs layers are shared by both apps; on top of them you add the Python-specific
: or Go-specific container.
w*g
Post #33
I don't think the reuse is really necessary. Reuse too much and you're back to an OS. One container runs one app; the OS takes care of dedupe and similar optimizations, and even if there is reuse it must be transparent. That's called separation of concerns.
[Quoted from c*******0's post:]
:
: You can think of it that way at first, but in practice there is very little redundancy, because Docker is very modular
: and many of the upper containers reuse the containers below them.
: A simple example: you have one nodejs + python app and one nodejs + go app,
: and your containers are roughly:
: Linux container
: Nodejs container
: Python or Go container
: The Linux and Nodejs layers are shared by both apps; on top of them you add the Python-specific
: or Go-specific container.
d*i
Post #34
This docker thing isn't really anything new; OS containers exist on almost every major OS:
http://en.wikipedia.org/wiki/Operating_system%E2%80%93level_vir
They just differ in implementation and features.
[Quoted from c*******0's post:]
:
: You can think of it that way at first, but in practice there is very little redundancy, because Docker is very modular
: and many of the upper containers reuse the containers below them.
: A simple example: you have one nodejs + python app and one nodejs + go app,
: and your containers are roughly:
: Linux container
: Nodejs container
: Python or Go container
: The Linux and Nodejs layers are shared by both apps; on top of them you add the Python-specific
: or Go-specific container.
c*0
Post #35
They are different. LXC is the underlying base for Docker; chroot... I don't even know where to start with that.
Like I said, Docker is nothing new in terms of technology, but innovative in terms of use case.
[Quoted from d****i's post:]
: This docker thing isn't really anything new; OS containers exist on almost every major OS:
: http://en.wikipedia.org/wiki/Operating_system%E2%80%93level_vir
: They just differ in implementation and features.
n*t
Post #36
Isn't this just cgroups + namespaces? Those have been out for years; how did it become the "next big deal"?
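For reference, a rough sketch of those raw primitives on a Linux host; it assumes a cgroups-v1 layout and util-linux's unshare, and none of this is how Docker itself is invoked:

# New PID/mount/network/UTS namespaces: the shell only sees its own processes
sudo unshare --pid --fork --mount-proc --net --uts bash -c 'hostname demo; ps aux'
# A cgroup capping the CPU share of whatever tasks are placed into it
sudo mkdir /sys/fs/cgroup/cpu/demo
echo 256 | sudo tee /sys/fs/cgroup/cpu/demo/cpu.shares
echo $$ | sudo tee /sys/fs/cgroup/cpu/demo/tasks
# Docker adds an image format and a union filesystem on top of these primitives.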
[Quoted from c****e's post:]
: http://www.pcworld.com/article/2159100/google-cloud-previews-do
d*r
Post #42
I still care most about debugging: when a configuration goes wrong, can I find user-friendly error messages in a fixed place?
When debugging, how much do the native Linux config states and Docker's Go modules cooperate?
For the containers sitting on top of cgroups, to what extent do Docker's Go modules understand and control them, and roughly how is that implemented?
http://en.wikipedia.org/wiki/File:Linux_kernel_unified_hierarch
[Quoted from c*******0's post:]
:
: So everything I said above was for nothing?
: Container reuse + git-style operations + the PaaS ecosystem: isn't that solving the big devops problems?
t*h
Post #44
Impressive. First time I've heard of it. Can someone explain?
[Quoted from c****e's post:]
: http://www.pcworld.com/article/2159100/google-cloud-previews-do
n*1
Post #45
Following the principle that explicit is better than implicit: unless the user explicitly sets it up, it's zero sharing.
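A small sketch of what "explicit sharing" looks like in practice; the image and path names are placeholders:

# Share a host directory into a container only because we ask for it
docker run -d --name db -v /srv/mongo-data:/data/db mongo
# Expose one container to another only via an explicit link
docker run -d --name web --link db:db mycorp/web-app
# Without -v, --link or published ports, containers see neither each other
# nor the host's filesystem.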
[Quoted from d*******r's post:]
: I still care most about debugging: when a configuration goes wrong, can I find user-
: friendly error messages in a fixed place?
: When debugging, how much do the native Linux config states and Docker's Go modules cooperate?
: For the containers sitting on top of cgroups, to what extent do Docker's Go modules understand and control them, and roughly
: how is that implemented?
: http://en.wikipedia.org/wiki/File:Linux_kernel_unified_hierarch
c*e
Post #46
It seems many people only look at the LXC side and think it's just a container. What really matters is that Docker brings in unionfs, so compute and storage are not only isolated but can also be inherited, which greatly improves manageability. It's not just for devops: in our product, every data processing module is built as a docker image, and dependency management has become much easier.
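A small sketch of the union-filesystem layering being described; the image name is a placeholder:

docker history mycorp/app-python
# Each line is a read-only layer; images built FROM the same base list the
# same layer IDs, i.e. the base is stored once and inherited.
docker diff $(docker ps -lq)
# Shows only what the most recent container changed on top of its image's layers.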
d*r
Post #47
It seems nobody has actually answered my question about debugging errors.
e*s
Post #49
f*e
Post #51
I will go to DockerCon next week. Anybody else going?
We are seriously looking into Docker now. Currently we are using AMIs because of the need for autoscaling, but it's inflexible, and creating an AMI and copying it to other regions is time-consuming. Docker looks to me like a lifesaver, as it can greatly simplify and shorten the CI/CD process.
d*r
Post #52
I don't see how Docker saves the time you're talking about.
[Quoted from f*****e's post:]
: I will go to the dockercon next week. Anybody else going?
: We are seriously looking into docker now. Currently we are using AMIs
: because of the need of autoscaling, but it's inflexible and the creation and
: copying to other regions are time-consuming. Docker looks to me like a
: life saver, as it can greatly simplify and shorten the time of CI/CD process
: .
l*t
Post #53
A bunch of talks at the Hadoop Summit today mentioned Docker. Time to look into it.
[Quoted from c****e's post:]
: http://www.pcworld.com/article/2159100/google-cloud-previews-do
f*e
Post #54
Disclaimer: I'm just starting to learn Docker, so my answers may not be correct.
1. Docker can be thought of as a lightweight VM, so creating and copying take less time compared to a full-blown VM.
2. More importantly, Docker is intelligent about its content. If you change a byte in a VM, you need to create a brand-new VM for the change. With Docker, you just apply the one-byte change to the current container, and you get a brand-new container with the change.
So instead of baking a new AMI for each deployment, I just need to create a new Docker container. This should save me around 6 minutes, plus another 6 minutes for copying the AMI to another region.
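A hedged sketch of the "apply only the change" idea; the image names are placeholders and assume the app is built from a Dockerfile:

# Rebuilding after a small change reuses every cached layer above the line
# that changed, so it takes seconds rather than re-baking everything:
docker build -t mycorp/app:v2 .
# Or record a manual change in a running container as a new layer:
docker commit $(docker ps -lq) mycorp/app:v2-hotfix
docker push mycorp/app:v2-hotfix   # only the new layers get uploaded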
[Quoted from d*******r's post:]
: I don't see how Docker saves the time you're talking about.
g*g
Post #55
In CI, such a saving is not that significant; you are not waiting for it to go live.
[Quoted from f*****e's post:]
: Disclaimer: I just start to learn docker, so my answers may not be correct.
: 1. docker can be think as lightweight vm. Therefore creating and copying
: take less time compared to a full-blown vm.
: 2. more importantly, docker is intelligent about its content. If you change
: a byte in a vm, you need to create a brand new vm for the change. For
: docker, you just need to apply the one-byte change to the current container,
: and you got a brand new container with the change.
: So instead of baking a new AMI for each deployment, I just need to create a
: new docker container. This should save me around 6 mins, then another 6
: mins for copying the AMI to another region.
P*i
Post #56
Definitely worth looking into.
The question is how you use it together with autoscaling; I have a feeling AWS will release something similar.
Also, how do you configure each container, e.g. security and the like?
[Quoted from f*****e's post:]
: Disclaimer: I just start to learn docker, so my answers may not be correct.
: 1. docker can be think as lightweight vm. Therefore creating and copying
: take less time compared to a full-blown vm.
: 2. more importantly, docker is intelligent about its content. If you change
: a byte in a vm, you need to create a brand new vm for the change. For
: docker, you just need to apply the one-byte change to the current container,
: and you got a brand new container with the change.
: So instead of baking a new AMI for each deployment, I just need to create a
: new docker container. This should save me around 6 mins, then another 6
: mins for copying the AMI to another region.
f*e
Post #61
Yes, it's AWS autoscaling.
Our current setup is like this:
When new code is committed, we create a new AMI for it, then run integration and staging tests with this newly created AMI. If it passes the tests, we create a new launch config and autoscaling group with this AMI, associate the ASG with the ELB, and scale down the previous ASG.
Almost half of the time is spent on AMI creation and copying.
The reason to use an AMI is that when a new instance is launched, it is ready for service without needing to be provisioned.
With Docker I guess we can just create a container for each commit, which is much faster than baking an AMI. Then when an instance is launched, it will download the latest container and run it.
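A rough sketch of that per-commit flow; the registry, image name and the GIT_COMMIT variable are assumptions (e.g. a Jenkins build variable), and the instance side assumes a generic AMI with Docker preinstalled:

# CI side: build and push an image tagged with the commit
docker build -t registry.example.com/myapp:${GIT_COMMIT} .
docker push registry.example.com/myapp:${GIT_COMMIT}
# Instance side (e.g. EC2 user-data): pull and run the latest build
docker pull registry.example.com/myapp:${GIT_COMMIT}
docker run -d -p 80:8080 registry.example.com/myapp:${GIT_COMMIT}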
[Quoted from d*******r's post:]
: I think the "autoscaling" he mentioned is scale-out in the broad sense, not necessarily that specific
: AWS service.
d*r
Post #63
Makes sense.
[Quoted from f*****e's post:]
: Yes it's aws autoscaling.
: Our current setup is like this:
: When new code is committed, we will create a new AMI for it. The run
: integration and staging tests with this newly created AMI. If it pass the
: tests, then we will create a new launch config and autoscaling group with
: this AMI, then associate the asg with the elb, and scaling down the previous
: asg.
: Almost half of the time is spent on ami creating and copying.
: The reason to use ami is that when a new instance is launched, it is ready
: for the service without the need to be provisioned.
g*g
Post #64
No, you shouldn't make a hotfix in minutes. If you rush to fix a bad situation, you may run into disaster. And you may find the root cause in hours or days, not minutes, to begin with; you shouldn't release it without QA testing either. Shaving a few minutes off deployment doesn't help.
What you should do instead is roll the cluster back to the previous AMI, which can be safely done in minutes. That's why we are religious about SOA, backward compatibility, and small, incremental releases. On a rainy day we may be removing a newly developed feature, but we never have to fix a showstopper while the customers are mad.
[Quoted from f*****e's post:]
: But it's significant for CD. For example, a hot fix should be deployed in a
: few minutes, instead of over 40 minutes.
P*i
Post #65
In that case I don't think Docker helps much.
Docker is mainly about isolation and scaling within an instance; that's a different thing from AWS autoscaling.
[Quoted from f*****e's post:]
: Yes it's aws autoscaling.
: Our current setup is like this:
: When new code is committed, we will create a new AMI for it. The run
: integration and staging tests with this newly created AMI. If it pass the
: tests, then we will create a new launch config and autoscaling group with
: this AMI, then associate the asg with the elb, and scaling down the previous
: asg.
: Almost half of the time is spent on ami creating and copying.
: The reason to use ami is that when a new instance is launched, it is ready
: for the service without the need to be provisioned.
g*g
Post #67
No, you always want to have something to roll back to. And if your cluster has more than a couple of instances, changing a file on all instances is slower and more error-prone than baking an AMI and deploying it concurrently to N instances.
[Quoted from P****i's post:]
: The difference is big. For example, if you only change a script or add a user, there's no need to rebuild the whole image.
c*o
Post #68
I've been doing devops lately, and I don't think Docker is all that magical.
It's a good complement, but it can't replace the current chef/puppet and the like.
It mainly solves only the deployment problem, not the maintenance/modification problem.
N*m
Post #70
f*e
Post #71
I agree that most of the time rolling back is the right choice. But our devs argue that sometimes they want to roll forward, such as when the database schemas are changed at the same time.
We are not in production yet, so all the debates are based on imagination. We plan to do true CI/CD: when a change is merged to master, a Jenkins pipeline kicks off to bake a new AMI and run tests, and the change is deployed to production without any human intervention within an hour.
And the devs said an hour for the pipeline is too long; they want 30 minutes. That's why Docker may help.
[Quoted from g*****g's post:]
: No, you shouldn't make a hotfix in minutes. If you rush to fix a bad
: situation, you may run into disaster. And you may find the root cause in
: hours, or days, not in minutes to begin with, you shouldn't release it
: without QA testing either. Shedding a few min in deployment doesn't help.
: What you should do though, is roll back the cluster to previous AMI, which
: can be safely done in minutes. That's why we are religious about SOA, back-
: compatiblity, small and incremental releases. In a rainy day, we may be
: removing a newly developed feature, but never have to fix a showstopper
: while the customers are mad.
N*m
Post #72
If you use puppet/chef, none of these are problems.
[Quoted from f*****e's post:]
: I agree that most of time rolling back is the right choice. But our devs
: argues that sometimes they want to roll forward, such as when the database
: schemas are changed at the same time.
: We are not in production yet, so all the debates are based on imagination.
: We plan to do true CI/CD: when a change is merged to the master, a jenkins
: pipeline will kick off to bake a new AMI and run tests, and the change will
: be deployed to production without any human intervention within an hour.
: And the devs said an hour for the pipeline is too long, they want 30 minutes
: . That's why docker may help.
f*e
Post #73
You can roll back with Docker too.
From my understanding, docker vs. VM is like git vs. CVS. Branching in git is lightweight because you only record the difference, while branching in CVS takes a few hours for a large project because you make a full copy.
So creating a new Docker container takes seconds, because you just save the difference from the old one it is based on, while for an AMI you need to bake the whole thing every time.
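A hedged sketch of what rolling back looks like with image tags; the registry, names and tags are placeholders:

# Keep the currently running build reachable under a stable tag
docker tag registry.example.com/myapp:abc123 registry.example.com/myapp:previous
# Deploy the new build...
docker run -d --name myapp registry.example.com/myapp:def456
# ...and if it misbehaves, put the previous tag back
docker stop myapp && docker rm myapp
docker run -d --name myapp registry.example.com/myapp:previous
# Because layers are shared, both versions sit on disk at little extra cost.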
[Quoted from g*****g's post:]
: No, you always want to have something to roll back to. And if your cluster
: has more than a couple of instances, changing a file on all instances are
: slower and more error-prone than baking an AMI and deploy it concurrently to
: N instances.
:
: image
f*e
Post #76
This guy shares the same view.
http://blog.scoutapp.com/articles/2013/08/28/docker-git-for-dep
[Quoted from f*****e's post:]
: You can roll back with docker too.
: From my understanding, docker vs. vm is like git vs. cvs. branching in git
: is lightweight because you only record the difference, while branching in
: cvs takes a few hours for a large project because you make a full copy.
: So creating a new docker container takes seconds, because you just save the
: difference from the old one it bases on. While for AMI you need to bake the
: whole thing every time.
:
: to
f*e
Post #77
I've only done limited research on this. On the Ansible side, they say you can just treat Docker as a VM and run Ansible playbooks against the Docker process.
But there is also the Dockerfile, which is in direct competition with the playbooks. I don't know which is the best approach.
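For what it's worth, a sketch of the two approaches; the base images, inventory.ini and site.yml are made-up placeholders:

# Approach 1: bake everything into the image with a Dockerfile
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]
EOF
docker build -t mycorp/web .
# Approach 2: start a bare container and provision it like a "VM",
# e.g. point Ansible at it over SSH (assumes an sshd-enabled base image
# and an inventory listing the container's address)
docker run -d -p 2222:22 --name web-raw mycorp/sshd-base
ansible-playbook -i inventory.ini site.yml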
[Quoted from d*******r's post:]
: Can you talk about using ansible and docker together: where do they complement each other, and where do they conflict?
d*r
Post #78
So they don't really work together, then...
What I've found is that in the end I mostly just use images and rsync directly: I usually set up a few hosts holding the new code/cfg, and after new instances are created from an image they automatically go pull the new code/cfg.
That way I don't have to rebuild the image for every little change.
What's the biggest problem with this kind of crude approach?
[Quoted from f*****e's post:]
: I only did limited research on this. From the ansible side, they said you
: can just treat docker as vm, and run ansible playbooks against the docker
: process.
: But there is also DockerFile, which is in direct competition with the
: playbooks. I don't know the best approach.
d*r
Post #80
I keep trying to figure out what the biggest benefit of these devops tools really is.
f*e
Post #81
That's another approach we are considering. The only problem is that the instance will not be in service immediately.
[Quoted from d*******r's post:]
: So they don't really work together, then...
: What I've found is that in the end I mostly just use images and rsync directly: I usually set up a few hosts holding the new code/
: cfg, and after new instances are created from an image they automatically go pull the new
: code/cfg.
: That way I don't have to rebuild the image for every little change.
: What's the biggest problem with this kind of crude approach?
d*r
Post #86
I've always felt you can seemingly get by without these devops tools, especially something as insanely complicated as chef. Something like ansible, which just runs commands on a batch of machines remotely, still seems handy.
Could a Docker user name a few usage scenarios where only Docker is super convenient and the other devops tools fall short? I'd like to learn.
[Quoted from f*****e's post:]
: same here. I have been a developer for 10+ years, and only work on devops
: for the last few months. I find out devops is very interesting and a lot to
: be learned.
w*g
Post #94
Is Docker written in Java? By this board's rules, if it's not written in Java it can go die.
[Quoted from c****e's post:]
: http://www.pcworld.com/article/2159100/google-cloud-previews-do
c*e
Post #101
You guys are still agonizing over the difference between docker and puppet; here's what a big shot says:
When small changes are needed to your servers, you can just use your CM tool to manage those changes. Over time the images will diverge from your current server configurations, so periodically you would create new server images to keep them closer aligned.
This is a variant of the Golden Image pattern that allows you to have the speed of using images, but helps you avoid the tedious image re-creation problem for small changes.
[Quoted from N*****m's post:]
: Isn't this just puppet/chef? Why reinvent the wheel?
N*m
Post #102
Then can you walk through the use case above in detail?
How does docker work with AWS autoscaling? When there's a new deployment, how does docker roll out the update, or roll back a deployment that turns out badly?
[Quoted from c*****e's post:]
: You guys are still agonizing over the difference between docker and puppet; here's what a big shot says:
: When small changes are needed to your servers, you can just use your CM tool
: to manage those changes. Over time the images will diverge from your
: current server configurations, so periodically you would create new server
: images to keep them closer aligned.
: This is a variant of the Golden Image pattern that allows you to have the
: speed of using images, but helps you avoid the tedious image re-creation
: problem for small changes.
c*e
Post #103
http://contino.co.uk/use-docker-continuous-delivery-part-2/
[Quoted from N*****m's post:]
: Then can you walk through the use case above in detail?
: How does docker work with AWS autoscaling? When there's a new deployment, how does docker roll out the update, or roll
: back a deployment that turns out badly?
N*m
Post #104
Thanks.
From a quick look, though, it still relies on ansible?
[Quoted from c*****e's post:]
: http://contino.co.uk/use-docker-continuous-delivery-part-2/
d*r
Post #105
Also hoping a devops expert can give a convincing Docker example.
c*e
Post #106
Of course; CoreOS uses fleet:
http://coreos.com/docs/launching-containers/launching/launching
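A rough sketch of launching a container across a CoreOS cluster with fleet; the unit name and image are placeholders:

cat > myapp.service <<'EOF'
[Unit]
Description=My app container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp -p 80:8080 mycorp/myapp
ExecStop=/usr/bin/docker stop myapp

[X-Fleet]
Conflicts=myapp*.service
EOF
fleetctl start myapp.service   # schedules the unit onto some machine in the cluster
fleetctl list-units            # shows where it landed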
[Quoted from N*****m's post:]
: Thanks.
: From a quick look, though, it still relies on ansible?