
Kubernetes 1.5.1 Deployment

> Kubernetes 1.5.1 configuration notes


# 1 Initialize the Environment


## 1.1 Environment

| Node   | IP          |
|--------|-------------|
| node-1 | 10.6.0.140  |
| node-2 | 10.6.0.187  |
| node-3 | 10.6.0.188  |


## 1.2 Set the hostname

```
hostnamectl --static set-hostname <hostname>
```

| IP         | hostname   |
|------------|------------|
| 10.6.0.140 | k8s-node-1 |
| 10.6.0.187 | k8s-node-2 |
| 10.6.0.188 | k8s-node-3 |


## 1.3 Configure hosts

```
vi /etc/hosts
```

| IP         | hostname   |
|------------|------------|
| 10.6.0.140 | k8s-node-1 |
| 10.6.0.187 | k8s-node-2 |
| 10.6.0.188 | k8s-node-3 |
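The table above corresponds to the following entries, added to `/etc/hosts` on every node:

```
10.6.0.140 k8s-node-1
10.6.0.187 k8s-node-2
10.6.0.188 k8s-node-3
```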

# 2.0 Deploy the Kubernetes Master

## 2.1 Add the yum repository

```
# Use a friend's yum mirror that hosts the kubernetes packages

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[mritdrepo]
name=Mritd Repository
baseurl=https://yum.mritd.me/centos/7/x86_64
enabled=1
gpgcheck=1
gpgkey=https://cdn.mritd.me/keys/rpm.public.key
EOF

yum makecache
yum install -y socat kubelet kubeadm kubectl kubernetes-cni
```


## 2.2 Install Docker

```
wget -qO- https://get.docker.com/ | sh


systemctl enable docker
systemctl start docker
```


## 2.3 Install the etcd cluster

```
yum -y install etcd

# Create the etcd data directory

mkdir -p /opt/etcd/data

chown -R etcd:etcd /opt/etcd/
```

Edit the configuration file /etc/etcd/etcd.conf and change the following parameters (the values below are for node-1; on node-2 and node-3 adjust ETCD_NAME, ETCD_DATA_DIR, and the IP addresses accordingly):

```
ETCD_NAME=etcd1
ETCD_DATA_DIR="/opt/etcd/data/etcd1.etcd"
ETCD_LISTEN_PEER_URLS="http://10.6.0.140:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.6.0.140:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.6.0.140:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://10.6.0.140:2380,etcd2=http://10.6.0.187:2380,etcd3=http://10.6.0.188:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://10.6.0.140:2379"
```

```
# Patch the etcd unit file so the clustering flags are passed on startup

sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service

# Start etcd

systemctl enable etcd
systemctl start etcd
systemctl status etcd

# Check cluster health

etcdctl cluster-health
```
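Only a handful of values in /etc/etcd/etcd.conf differ between the three nodes. As a sketch (the `etcd_conf` helper is hypothetical, not part of the original steps), the per-node lines can be generated like this:

```shell
# Hypothetical helper: print the node-specific lines of /etc/etcd/etcd.conf
# for a given member name and IP. The cluster-wide values
# (ETCD_INITIAL_CLUSTER, token, state) are identical on all three nodes
# and are omitted here.
etcd_conf() {  # usage: etcd_conf <name> <ip>
  name=$1; ip=$2
  cat <<EOF
ETCD_NAME=$name
ETCD_DATA_DIR="/opt/etcd/data/$name.etcd"
ETCD_LISTEN_PEER_URLS="http://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="http://$ip:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$ip:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://$ip:2379"
EOF
}

# e.g. on node-2:
etcd_conf etcd2 10.6.0.187
```

The output can then be merged into /etc/etcd/etcd.conf on the matching node.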

## 2.4 Download images

```
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
```

```
# If downloads are slow, configure a registry mirror by adding the
# following flag to the docker startup command:

--registry-mirror="http://b438f72b.m.daocloud.io"
```
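To sanity-check the mapping before pulling a dozen images, the loop above can be dry-run by printing the docker commands instead of executing them (a sketch; the image list is abbreviated here):

```shell
# Print, rather than run, the pull/tag/rmi commands, to verify the
# jicki/* -> gcr.io/google_containers/* mapping.
images=(kube-proxy-amd64:v1.5.1 pause-amd64:3.0 kubedns-amd64:1.9)
for imageName in "${images[@]}"; do
  echo "docker pull jicki/$imageName"
  echo "docker tag jicki/$imageName gcr.io/google_containers/$imageName"
  echo "docker rmi jicki/$imageName"
done
```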


## 2.5 Start kubelet

```
systemctl enable kubelet
systemctl start kubelet
```

## 2.6 Create the cluster

```
kubeadm init --api-advertise-addresses=10.6.0.140 \
--external-etcd-endpoints=http://10.6.0.140:2379,http://10.6.0.187:2379,http://10.6.0.188:2379 \
--use-kubernetes-version v1.5.1 \
--pod-network-cidr 10.244.0.0/16
```

```
Flag --external-etcd-endpoints has been deprecated, this flag will be removed when componentconfig exists
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "c53ef2.d257d49589d634f0"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 15.299235 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.002937 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 2.502881 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140


```

## 2.7 Record the token

```
You can now join any number of machines by running the following on each node:

kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140
```

## 2.8 Configure the network

```
# Pull the image in advance; otherwise it may fail to download

docker pull quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64

# Or pull it from a mirror and retag

docker pull jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
docker tag jicki/flannel-git:v0.6.1-28-g5dde68d-amd64 quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64
docker rmi jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
```

```
# http://kubernetes.io/docs/admin/addons/ lists several network add-ons; pick one.

# Flannel is used here. When choosing Flannel, kubeadm init must be run
# with --pod-network-cidr.

kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

## 2.9 Check kubelet status

```
systemctl status kubelet
```


# 3.0 Deploy the Kubernetes Nodes


## 3.1 Install Docker

```
wget -qO- https://get.docker.com/ | sh


systemctl enable docker
systemctl start docker
```

## 3.2 Download images

```
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done

```


## 3.3 Start kubelet

```
systemctl enable kubelet
systemctl start kubelet
```


## 3.4 Join the cluster

```
kubeadm join --token=c53ef2.d257d49589d634f0 10.6.0.140

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
```

## 3.5 Check cluster status

```
[root@k8s-node-1 ~]# kubectl get node
NAME         STATUS         AGE
k8s-node-1   Ready,master   27m
k8s-node-2   Ready          6s
k8s-node-3   Ready          9s
```


## 3.6 Check service status

```
[root@k8s-node-1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-qrp68               1/1       Running   1          1h
kube-system   kube-apiserver-k8s-node-1            1/1       Running   2          1h
kube-system   kube-controller-manager-k8s-node-1   1/1       Running   2          1h
kube-system   kube-discovery-1769846148-g2lpc      1/1       Running   1          1h
kube-system   kube-dns-2924299975-xbhv4            4/4       Running   3          1h
kube-system   kube-flannel-ds-39g5n                2/2       Running   2          1h
kube-system   kube-flannel-ds-dwc82                2/2       Running   2          1h
kube-system   kube-flannel-ds-qpkm0                2/2       Running   2          1h
kube-system   kube-proxy-16c50                     1/1       Running   2          1h
kube-system   kube-proxy-5rkc8                     1/1       Running   2          1h
kube-system   kube-proxy-xwrq0                     1/1       Running   2          1h
kube-system   kube-scheduler-k8s-node-1            1/1       Running   2          1h
```


# 4.0 Configure Kubernetes

## 4.1 Control the cluster from another host

```
# Back up this configuration file from the master node:

/etc/kubernetes/admin.conf

# Copy it to the other machine, then use it to control the cluster:

kubectl --kubeconfig ./admin.conf get nodes
```


## 4.2 Configure the dashboard

```
# Download the yaml file; importing it unmodified would pull the images
# from the official registry

curl -O https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

# Edit the yaml file

vi kubernetes-dashboard.yaml

# change

image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0

# to

image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0

# and change

imagePullPolicy: Always

# to

imagePullPolicy: IfNotPresent
```
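The two manual edits above can also be scripted. This is a sketch, assuming the yaml contains exactly the two strings shown; the `fix_dashboard` filter is a hypothetical helper, not part of the original steps:

```shell
# Rewrite the dashboard image tag and the pull policy on stdin -> stdout.
fix_dashboard() {
  sed -e 's|kubernetes-dashboard-amd64:v1.4.0|kubernetes-dashboard-amd64:v1.5.0|' \
      -e 's|imagePullPolicy: Always|imagePullPolicy: IfNotPresent|'
}

# usage: fix_dashboard < kubernetes-dashboard.yaml > kubernetes-dashboard-fixed.yaml
echo 'imagePullPolicy: Always' | fix_dashboard   # → imagePullPolicy: IfNotPresent
```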


```
kubectl create -f ./kubernetes-dashboard.yaml

deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
```


```
# Check the NodePort, i.e. the port reachable from outside the cluster

kubectl describe svc kubernetes-dashboard --namespace=kube-system

NodePort:               <unset> 31736/TCP
```


```
# Access the dashboard at

http://10.6.0.140:31736
```


# 5.0 Deploying Applications on Kubernetes


## 5.1 Deploy an nginx RC


> Write an nginx yaml

```
apiVersion: v1 
kind: ReplicationController 
metadata: 
  name: nginx-rc 
spec: 
  replicas: 2 
  selector: 
    name: nginx 
  template: 
    metadata: 
      labels: 
        name: nginx 
    spec: 
      containers: 
        - name: nginx 
          image: nginx:alpine
          imagePullPolicy: IfNotPresent
          ports: 
            - containerPort: 80
```

```
[root@k8s-node-1 ~]# kubectl get rc
NAME       DESIRED   CURRENT   READY     AGE
nginx-rc   2         2         2         2m


[root@k8s-node-1 ~]# kubectl get pod -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-rc-2s8k9   1/1       Running   0          10m       10.32.0.3   k8s-node-1
nginx-rc-s16cm   1/1       Running   0          10m       10.40.0.1   k8s-node-2
```

> Write an nginx service so that containers inside the cluster can access it (ClusterIP)

```
apiVersion: v1 
kind: Service 
metadata: 
  name: nginx-svc 
spec: 
  ports: 
    - port: 80
      targetPort: 80
      protocol: TCP 
  selector: 
    name: nginx
```


```
[root@k8s-node-1 ~]# kubectl create -f nginx-svc.yaml
service "nginx-svc" created


[root@k8s-node-1 ~]# kubectl get svc -o wide
NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE       SELECTOR
kubernetes   10.0.0.1      <none>        443/TCP   2d        <none>
nginx-svc    10.6.164.79   <none>        80/TCP    29s       name=nginx

```



> Write a curl pod

```
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: radial/busyboxplus:curl
    command:
    - sh
    - -c
    - while true; do sleep 1; done
```


```
# Test communication between pods: the curl pod queries the service by name
[root@k8s-node-1 ~]# kubectl exec curl curl nginx-svc
```



```
# The service IP is reachable from any node

[root@k8s-node-2 ~]# curl 10.6.164.79
[root@k8s-node-3 ~]# curl 10.6.164.79
```

> Write an nginx service that is accessible from outside the cluster (NodePort)

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-node
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    name: nginx
```


```
[root@k8s-node-1 ~]# kubectl get svc -o wide
NAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE       SELECTOR
kubernetes       10.0.0.1       <none>        443/TCP   2d        <none>
nginx-svc        10.6.164.79    <none>        80/TCP    29m       name=nginx
nginx-svc-node   10.12.95.227   <nodes>       80/TCP    17s       name=nginx


[root@k8s-node-1 ~]# kubectl describe svc nginx-svc-node |grep NodePort
Type:                   NodePort
NodePort:               <unset> 32669/TCP
```



```
# The service can be reached via any node's physical IP plus the NodePort

http://10.6.0.140:32669

http://10.6.0.187:32669

http://10.6.0.188:32669
```


## 5.2 Deploy a ZooKeeper cluster


> Write a zookeeper-cluster.yaml

```
apiVersion: extensions/v1beta1
kind: Deployment 
metadata: 
  name: zookeeper-1
spec: 
  replicas: 1
  template: 
    metadata: 
      labels: 
        name: zookeeper-1 
    spec: 
      containers: 
        - name: zookeeper-1
          image: zk:alpine 
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "1"
          - name: NODES
            value: "0.0.0.0,zookeeper-2,zookeeper-3"
          ports:
          - containerPort: 2181

---

apiVersion: extensions/v1beta1 
kind: Deployment
metadata:
  name: zookeeper-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-2
    spec:
      containers:
        - name: zookeeper-2
          image: zk:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "2"
          - name: NODES
            value: "zookeeper-1,0.0.0.0,zookeeper-3"
          ports:
          - containerPort: 2181

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: zookeeper-3
    spec:
      containers:
        - name: zookeeper-3
          image: zk:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "3"
          - name: NODES
            value: "zookeeper-1,zookeeper-2,0.0.0.0"
          ports:
          - containerPort: 2181
---

apiVersion: v1 
kind: Service 
metadata: 
  name: zookeeper-1 
  labels:
    name: zookeeper-1
spec: 
  ports: 
    - name: client
      port: 2181
      protocol: TCP
    - name: followers
      port: 2888
      protocol: TCP
    - name: election
      port: 3888
      protocol: TCP
  selector: 
    name: zookeeper-1

---

apiVersion: v1 
kind: Service 
metadata: 
  name: zookeeper-2
  labels:
    name: zookeeper-2
spec: 
  ports: 
    - name: client
      port: 2181
      protocol: TCP
    - name: followers
      port: 2888
      protocol: TCP
    - name: election
      port: 3888
      protocol: TCP
  selector: 
    name: zookeeper-2

---

apiVersion: v1 
kind: Service 
metadata: 
  name: zookeeper-3
  labels:
    name: zookeeper-3
spec: 
  ports: 
    - name: client
      port: 2181
      protocol: TCP
    - name: followers
      port: 2888
      protocol: TCP
    - name: election
      port: 3888
      protocol: TCP
  selector: 
    name: zookeeper-3
    
```
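Each ZooKeeper replica above sets NODES so that its own slot is `0.0.0.0` and every peer is listed by service name. That pattern can be sketched with a small helper (hypothetical, for illustration only):

```shell
# Build the NODES value for replica <id> out of <total> replicas:
# the replica's own position is 0.0.0.0, every peer is zookeeper-<i>.
zk_nodes() {  # usage: zk_nodes <id> <total>
  id=$1; total=$2; out=""
  for i in $(seq 1 "$total"); do
    if [ "$i" -eq "$id" ]; then host="0.0.0.0"; else host="zookeeper-$i"; fi
    out="$out${out:+,}$host"
  done
  echo "$out"
}

zk_nodes 2 3   # → zookeeper-1,0.0.0.0,zookeeper-3
```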


```
[root@k8s-node-1 ~]# kubectl create -f zookeeper-cluster.yaml --record


[root@k8s-node-1 ~]# kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP          NODE
zookeeper-1-2149121414-cfyt4   1/1       Running   0          51m       10.32.0.3   k8s-node-2
zookeeper-2-2653289864-0bxee   1/1       Running   0          51m       10.40.0.1   k8s-node-3
zookeeper-3-3158769034-5csqy   1/1       Running   0          51m       10.40.0.2   k8s-node-3


[root@k8s-node-1 ~]# kubectl get deployment -o wide
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
zookeeper-1   1         1         1            1           51m
zookeeper-2   1         1         1            1           51m
zookeeper-3   1         1         1            1           51m


[root@k8s-node-1 ~]# kubectl get svc -o wide
NAME          CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE       SELECTOR
zookeeper-1   10.8.111.19    <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-1
zookeeper-2   10.6.10.124    <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-2
zookeeper-3   10.0.146.143   <none>        2181/TCP,2888/TCP,3888/TCP   51m       name=zookeeper-3
```

## 5.3 Deploy a Kafka cluster


> Write a kafka-cluster.yaml

```

apiVersion: extensions/v1beta1
kind: Deployment 
metadata: 
  name: kafka-deployment-1
spec: 
  replicas: 1
  template: 
    metadata: 
      labels: 
        name: kafka-1 
    spec: 
      containers: 
        - name: kafka-1
          image: kafka:alpine 
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "1"
          - name: ZK_NODES
            value: "zookeeper-1,zookeeper-2,zookeeper-3"
          ports:
          - containerPort: 9092

---

apiVersion: extensions/v1beta1 
kind: Deployment
metadata:
  name: kafka-deployment-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-2  
    spec:
      containers:
        - name: kafka-2
          image: kafka:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "2"
          - name: ZK_NODES
            value: "zookeeper-1,zookeeper-2,zookeeper-3"
          ports:
          - containerPort: 9092

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-deployment-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kafka-3  
    spec:
      containers:
        - name: kafka-3
          image: kafka:alpine
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_ID
            value: "3"
          - name: ZK_NODES
            value: "zookeeper-1,zookeeper-2,zookeeper-3"
          ports:
          - containerPort: 9092

---

apiVersion: v1 
kind: Service 
metadata: 
  name: kafka-1 
  labels:
    name: kafka-1
spec: 
  ports: 
    - name: client
      port: 9092
      protocol: TCP
  selector: 
    name: kafka-1

---

apiVersion: v1 
kind: Service 
metadata: 
  name: kafka-2
  labels:
    name: kafka-2
spec: 
  ports: 
    - name: client
      port: 9092
      protocol: TCP
  selector: 
    name: kafka-2

---

apiVersion: v1 
kind: Service 
metadata: 
  name: kafka-3
  labels:
    name: kafka-3
spec: 
  ports: 
    - name: client
      port: 9092
      protocol: TCP
  selector: 
    name: kafka-3

```


# FAQ


## kube-discovery error

```
failed to create "kube-discovery" deployment [deployments.extensions "kube-discovery" already exists]
```

If this occurs, reset and re-run the init:

```
kubeadm reset

kubeadm init
```
