From 513aef0bae0ca6667e5f18507392df9ee6fd048e Mon Sep 17 00:00:00 2001
From: mviitanen
Date: Mon, 8 Mar 2021 16:13:35 +0200
Subject: [PATCH 1/2] New install doc structure

Install documentation structure changed and aligned. Minor changes in
the content.

Signed-off-by: mviitanen
---
 docs/configuration.md          |   1 +
 docs/high-availability.md      |  23 +++++
 docs/img/k0sctl_deployment.png | Bin 0 -> 18573 bytes
 docs/install.md                | 161 ++++++++++------------------
 docs/k0s-in-docker.md          |  49 +++++++--
 docs/k0s-multi-node.md         | 180 +++++++++++++++------------------
 docs/k0sctl-install.md         |  44 ++++++--
 docs/shell-completion.md       |  38 +++++++
 docs/user-management.md        |  22 ++++
 mkdocs.yml                     |  23 +++--
 10 files changed, 305 insertions(+), 236 deletions(-)
 create mode 100644 docs/high-availability.md
 create mode 100644 docs/img/k0sctl_deployment.png
 create mode 100644 docs/shell-completion.md
 create mode 100644 docs/user-management.md

diff --git a/docs/configuration.md b/docs/configuration.md
index 2eab2b2d1656..674ed0b452fb 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -249,6 +249,7 @@ extensions:
 This way you get a declarative way to configure the cluster, and the k0s controller manages the setup of the defined extension Helm charts as part of the cluster bootstrap process.
 Some examples of what you could use as extension charts:
+
 - Ingress controllers: [Nginx ingress](https://github.com/helm/charts/tree/master/stable/nginx-ingress), [Traefik ingress](https://github.com/traefik/traefik-helm-chart) ([tutorial](examples/traefik-ingress.md)),
 - Volume storage providers: [OpenEBS](https://openebs.github.io/charts/), [Rook](https://github.com/rook/rook/blob/master/Documentation/helm-operator.md), [Longhorn](https://longhorn.io/docs/0.8.1/deploy/install/install-with-helm/)
 - Monitoring: [Prometheus](https://github.com/prometheus-community/helm-charts/), [Grafana](https://github.com/grafana/helm-charts)
diff --git a/docs/high-availability.md b/docs/high-availability.md
new file mode 100644
index 000000000000..c69f5e71928f
--- /dev/null
+++ b/docs/high-availability.md
@@ -0,0 +1,23 @@
+## Control Plane High Availability
+
+The following prerequisites are required in order to configure an HA control plane:
+
+### Requirements
+##### Load Balancer
+A load balancer with a single external address should be configured as the IP gateway for the controllers.
+The load balancer should allow traffic to each controller on the following ports:
+
+- 6443
+- 8132
+- 8133
+- 9443
+
+##### Cluster configuration
+On each controller node, a k0s.yaml configuration file should be created.
+The following options need to match on each node, otherwise the control plane components will end up in inconsistent states:
+
+- `network`
+- `storage`: Needless to say, one cannot create a clustered control plane with each node only storing data locally on SQLite.
+- `externalAddress`
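+
+Because these settings live in a separate file on every controller, it is easy for them to drift apart. A quick way to verify that the configurations match (a sketch; it assumes the same k0s.yaml path is used on every controller) is to compare file checksums:
+
+```sh
+# Run on every controller; identical output means identical configuration files.
+# Comparing whole files is stricter than required, but it guarantees that the
+# network, storage and externalAddress sections match.
+$ sha256sum /path/to/k0s.yaml
+```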
+
+[Full configuration file reference](configuration.md)
\ No newline at end of file
diff --git a/docs/img/k0sctl_deployment.png b/docs/img/k0sctl_deployment.png
new file mode 100644
index 0000000000000000000000000000000000000000..4099ac5e54b09b92f67a4e280b9384347f28d437
GIT binary patch
literal 18573
[18573 bytes of binary PNG image data omitted]

literal 0
HcmV?d00001

diff --git a/docs/install.md b/docs/install.md
index c3e5d3ea4e7b..34c390ae5835 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -1,84 +1,50 @@
-# Download the k0s binary
+# Quick Start Guide
 
-## Prerequisites
+In this tutorial you'll create a full Kubernetes cluster with just one node, including both the controller and the worker. This is well suited for environments where high availability and multiple nodes are not needed. It is the easiest install method for experimenting with k0s.
 
-* [cURL](https://curl.se/)
+### Prerequisites
 
-Before proceeding, make sure to review the [System Requirements](system-requirements.md)
+Before proceeding, make sure to review the [System Requirements](system-requirements.md).
 
-## K0s Download Script
-```
-$ curl -sSLf https://get.k0s.sh | sudo sh
-```
-The download script accepts the following environment variables:
-
-1. `K0S_VERSION=v0.11.0` - select the version of k0s to be installed
-2. `DEBUG=true` - outputs commands and their arguments as they are executed.
-
-## Installing k0s as a service on the local system
+### Installation steps
 
-The `k0s install` sub-command will install k0s as a system service on hosts running one of the supported init systems: Systemd or OpenRC.
-
-Install can be executed for workers, controllers or single node (controller+worker) instances.
-
-The `install controller` sub-command accepts the same flags and parameters as the `k0s controller` sub-command does.
+#### 1. Download k0s
 
-```
-$ k0s install controller --help
-
-Helper command for setting up k0s as controller node on a brand-new system. Must be run as root (or with sudo)
-
-Usage:
-  k0s install controller [flags]
-
-Aliases:
-  controller, server
-
-Examples:
-All default values of controller command will be passed to the service stub unless overriden.
-
-With controller subcommand you can setup a single node cluster by running:
-
-  k0s install controller --enable-worker
-
-
-Flags:
-  -c, --config string            config file (default: ./k0s.yaml)
-      --cri-socket string        contrainer runtime socket to use, default to internal containerd. Format: [remote|docker]:[path-to-socket]
-  -d, --debug                    Debug logging (default: false)
-      --enable-worker            enable worker (default false)
-  -h, --help                     help for controller
-  -l, --logging stringToString   Logging Levels for the different components (default [konnectivity-server=1,kube-apiserver=1,kube-controller-manager=1,kube-scheduler=1,kubelet=1,kube-proxy=1,etcd=info,containerd=info])
-      --profile string           worker profile to use on the node (default "default")
-      --token-file string        Path to the file containing join-token.
-
-Global Flags:
-      --data-dir string          Data Directory for k0s (default: /var/lib/k0s). DO NOT CHANGE for an existing setup, things will break!
-      --debugListenOn string     Http listenOn for debug pprof handler (default ":6060")
-```
+The k0s download script downloads the latest stable k0s and makes it executable from `/usr/bin/k0s`.
 
-For example, the command below will install a single node k0s service on Ubuntu 20.10:
+```sh
+$ curl -sSLf https://get.k0s.sh | sudo sh
+```
 
-```
-$ k0s install controller --enable-worker
-INFO[2021-02-24 11:05:42] no config file given, using defaults
-INFO[2021-02-24 11:05:42] creating user: etcd
-INFO[2021-02-24 11:05:42] creating user: kube-apiserver
-INFO[2021-02-24 11:05:42] creating user: konnectivity-server
-INFO[2021-02-24 11:05:42] creating user: kube-scheduler
-INFO[2021-02-24 11:05:42] Installing k0s service
-```
+#### 2. Install k0s as a service
 
-## Run k0s as a service
+The `k0s install` sub-command will install k0s as a system service on the local host running one of the supported init systems: Systemd or OpenRC. Install can be executed for workers, controllers or single node (controller+worker) instances.
 
-```
-$ systemctl start k0scontroller
-```
+This command will install a single-node k0s, including both the controller and worker functions, with the default configuration:
+
+```sh
+$ sudo k0s install controller --enable-worker
+```
+
+The `k0s install controller` sub-command accepts the same flags and parameters as `k0s controller`. See the [manual install](k0s-multi-node.md#installation-steps) for an example of passing a custom config file.
+
+#### 3. Start k0s as a service
+
+To start the k0s service, run:
+```sh
+$ sudo systemctl start k0scontroller
+```
+It usually takes 1-2 minutes until the node is ready for deploying applications.
+
+If you want the k0s service to start automatically after a node restart, enable the service. This step is optional.
+```sh
+$ sudo systemctl enable k0scontroller
+```
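+
+If you'd rather wait for readiness than poll by hand, you can block until the node reports Ready (a sketch; the timeout value is an arbitrary example):
+```sh
+# Wait until the local node is Ready, giving up after two minutes:
+$ sudo k0s kubectl wait --for=condition=Ready node --all --timeout=120s
+```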
 
-### Check service status
+#### 4. Check service, logs and k0s status
 
-```
-$ systemctl status k0scontroller
+You can check the service status and logs like this:
+```sh
+$ sudo systemctl status k0scontroller
    Loaded: loaded (/etc/systemd/system/k0scontroller.service; enabled; vendor preset: enabled)
    Active: active (running) since Fri 2021-02-26 08:37:23 UTC; 1min 25s ago
      Docs: https://docs.k0sproject.io
  Main PID: 1408647 (k0s)
     Tasks: 96
    Memory: 1.2G
    CGroup: /system.slice/k0scontroller.service
 ....
 ```
 
-### Query cluster status
-
-```
-$ k0s status
-Version: v0.11.0-beta.2-16-g02cddab
-Process ID: 9322
+To get general information about your k0s instance:
+```sh
+$ sudo k0s status
+Version: v0.11.0
+Process ID: 436
 Parent Process ID: 1
 Role: controller+worker
 Init System: linux-systemd
 ```
 
-### Fetch nodes
+#### 5. Access your cluster using kubectl
 
-```
-$ k0s kubectl get nodes
+The Kubernetes command-line tool `kubectl` is included in k0s. You can use it, for example, to deploy your application or to check your node status like this:
+```sh
+$ sudo k0s kubectl get nodes
 NAME   STATUS   ROLES    AGE    VERSION
 k0s    Ready    <none>   4m6s   v1.20.4-k0s1
 ```
 
+#### 6. Clean-up
 
-## Enabling Shell Completion
-The k0s completion script for Bash, zsh, fish and powershell can be generated with the command `k0s completion < shell >`. Sourcing the completion script in your shell enables k0s autocompletion.
-
-### Bash
-
-```
-echo 'source <(k0s completion bash)' >>~/.bashrc
-```
-
-```
-# To load completions for each session, execute once:
-$ k0s completion bash > /etc/bash_completion.d/k0s
-```
+If you want to remove the k0s installation, you should first stop the service:
+```sh
+$ sudo systemctl stop k0scontroller
+```
 
-### Zsh
-If shell completion is not already enabled in your environment you will need
-to enable it. You can execute the following once:
-
-```
-$ echo "autoload -U compinit; compinit" >> ~/.zshrc
-```
-```
-# To load completions for each session, execute once:
-$ k0s completion zsh > "${fpath[1]}/_k0s"
-```
-You will need to start a new shell for this setup to take effect.
-
-### Fish
-
-```
-$ k0s completion fish | source
-```
-```
-# To load completions for each session, execute once:
-$ k0s completion fish > ~/.config/fish/completions/k0s.fish
-```
+Then you can execute `k0s reset`, which cleans up the installed system service, data directories, containers, mounts and network namespaces. There are still a few bits (e.g. iptables) that cannot be easily cleaned up, so a reboot after the reset is highly recommended.
+```sh
+$ sudo k0s reset
+```
 
-## Under the hood
-
-Workers are always run as root.
-For controllers, the command will create the following system users:
-`etcd`, `kube-apiserver`, `konnectivity-server`, `kube-scheduler`
+### Next Steps
 
-## Additional Documentation
-see: [k0s install](cli/k0s_install.md)
+- [Automated Cluster Setup](k0sctl-install.md) for deploying and upgrading multi-node clusters with k0sctl
+- [Manual Install](k0s-multi-node.md) for advanced users who want to deploy multi-node clusters manually
+- [Control plane configuration options](configuration.md), e.g. for networking and datastore configuration
+- [Worker node configuration options](worker-node-config.md), e.g. for node labels and kubelet arguments
+- [Support for cloud providers](cloud-providers.md), e.g. for load balancer or storage configuration
+- [Installing the Traefik Ingress Controller](examples/traefik-ingress.md), a tutorial for ingress deployment
diff --git a/docs/k0s-in-docker.md b/docs/k0s-in-docker.md
index bfcdce2b4e67..0999bf8f9251 100644
--- a/docs/k0s-in-docker.md
+++ b/docs/k0s-in-docker.md
@@ -1,35 +1,53 @@
 # Running k0s in Docker
 
-We publish a k0s container image with every release. By default, we run both controller and worker in the same container to provide an easy local testing "cluster".
+In this tutorial you'll create a k0s cluster on top of Docker. By default, both controller and worker are run in the same container to provide an easy local testing "cluster". The tutorial also shows how to add additional worker nodes to the cluster.
 
-The containers are published both on Docker Hub and GitHub. The examples in this page show Docker Hub, because it's more simple to use. Using GitHub requires a separate authentication (not covered here). Alternative links:
+### Prerequisites
+
+Docker environment on top of Mac, Windows or Linux. [Get Docker](https://docs.docker.com/get-docker/).
+
+### Container images
+
+The k0s containers are published both on Docker Hub and GitHub. The examples on this page use Docker Hub, because it is simpler to use. Using GitHub requires separate authentication (not covered here). Alternative links:
 
 - docker.io/k0sproject/k0s:latest
 - docker.pkg.github.com/k0sproject/k0s/k0s:"version"
 
-You can run your own k0s-in-docker easily with:
+### Installation steps
+
+#### 1. Start k0s
+
+You can run your own k0s in Docker easily with:
 
 ```sh
 docker run -d --name k0s --hostname k0s --privileged -v /var/lib/k0s -p 6443:6443 docker.io/k0sproject/k0s:latest
 ```
-Just grab the kubeconfig file with `docker exec k0s cat /var/lib/k0s/pki/admin.conf` and paste e.g. into [Lens](https://github.com/lensapp/lens/).
 
-## Running workers
+#### 2. Create additional workers (optional)
 
-If you want to attach multiple workers nodes into the cluster you can run separate containers for each worker.
+If you attach multiple worker nodes to the cluster, you can distribute your application containers across separate workers.
 
 First, we need a join token for the worker:
 ```sh
 token=$(docker exec -t -i k0s k0s token create --role=worker)
 ```
 
-Then join a new worker by running the container with:
-
+Then create and join a new worker by running the container with:
 ```sh
 docker run -d --name k0s-worker1 --hostname k0s-worker1 --privileged -v /var/lib/k0s docker.io/k0sproject/k0s:latest k0s worker $token
 ```
 
 Repeat for as many workers as you need, and have resources for. :)
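+
+If a worker does not appear in the cluster, its container logs are the first place to look (a sketch using the container name from above):
+```sh
+# Follow the worker container's logs to verify it connected to the controller:
+docker logs -f k0s-worker1
+```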
 
-## Docker Compose
+#### 3. Access your cluster
+
+You can access your cluster with kubectl:
+```sh
+docker exec k0s kubectl get nodes
+```
+
+Alternatively, grab the kubeconfig file with `docker exec k0s cat /var/lib/k0s/pki/admin.conf` and paste it e.g. into [Lens](https://github.com/lensapp/lens/).
+
+### Docker Compose (alternative)
 
 You can also run k0s with Docker Compose:
 ```yaml
 version: "3.9"
 services:
 # Any additional configuration goes here ...
 ```
 
-## Known limitations
+### Known limitations
 
-### No custom Docker networks
+#### No custom Docker networks
 
 Currently, we cannot run k0s nodes if the containers are configured to use custom networks, e.g. with `--net my-net`. This is caused by the fact that Docker sets up a custom DNS service within the network, and that messes up CoreDNS. We know that some workarounds are possible, but they are a bit hackish. On the other hand, running k0s cluster(s) in the bridge network should not cause issues.
+
+### Next Steps
+
+- [Automated Cluster Setup](k0sctl-install.md) for deploying and upgrading multi-node clusters with k0sctl
+- [Manual Install](k0s-multi-node.md) for advanced users who want to deploy multi-node clusters manually
+- [Control plane configuration options](configuration.md), e.g. for networking and datastore configuration
+- [Worker node configuration options](worker-node-config.md), e.g. for node labels and kubelet arguments
+- [Support for cloud providers](cloud-providers.md), e.g. for load balancer or storage configuration
+- [Installing the Traefik Ingress Controller](examples/traefik-ingress.md), a tutorial for ingress deployment
diff --git a/docs/k0s-multi-node.md b/docs/k0s-multi-node.md
index 96f2fa38f69c..5a63f74ced93 100644
--- a/docs/k0s-multi-node.md
+++ b/docs/k0s-multi-node.md
@@ -1,13 +1,32 @@
-# Creating a multi-node cluster
+# Manual Install (for advanced users)
 
-As k0s binary has everything it needs packaged into a single binary, it makes it super easy to spin up Kubernetes clusters.
+In this tutorial you'll create a multi-node cluster that is locally managed on each node. Installing the nodes takes several steps: each node is installed separately and then connected together with access tokens. This tutorial is targeted at advanced users who want to set up their k0s nodes manually.
 
-## Prerequisites
+### Prerequisites
 
-Install k0s as documented in the [installation instructions](install.md)
+Before proceeding, make sure to review the [System Requirements](system-requirements.md).
 
+To speed up the usage of the `k0s` command, you may want to enable [shell completion](shell-completion.md).
 
-## Bootstrapping a controller node
+### Installation steps
+
+#### 1. Download k0s
+
+The k0s download script downloads the latest stable k0s and makes it executable from `/usr/bin/k0s`.
+```sh
+$ curl -sSLf https://get.k0s.sh | sudo sh
+```
+The download script accepts the following environment variables:
+
+1. `K0S_VERSION=v0.11.0` - select the version of k0s to be installed
+2. `DEBUG=true` - output commands and their arguments as they are executed
+
+If you need to pass environment variables through sudo, you may need `--preserve-env`, like this:
+```sh
+curl -sSLf https://get.k0s.sh | sudo --preserve-env=K0S_VERSION sh
+```
+
+#### 2. Bootstrap a controller node
 
 Create a configuration file:
 
 ```sh
 $ k0s default-config > k0s.yaml
 ```
 
 If you wish to modify some of the settings, please check out the [configuration](configuration.md) documentation.
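+
+For example, to take a quick look at the generated network defaults before deciding what to change (a sketch; the section name follows the default `k0s.yaml` layout):
+```sh
+# Show the network section of the generated configuration, then edit as needed:
+$ grep -A 6 'network:' k0s.yaml
+```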
 
 ```sh
-$ k0s install controller
-INFO[2021-02-25 15:34:59] Installing k0s service
+$ k0s install controller -c k0s.yaml
+```
+```sh
 $ systemctl start k0scontroller
 ```
 
-k0s process will act as a "supervisor" for all of the control plane components.
-In a few seconds you'll have the control plane up-and-running.
+The k0s process will act as a "supervisor" for all of the control plane components. In a few seconds you'll have the control plane up and running.
 
-## Create a join token
+#### 3. Create a join token
 
-To be able to join workers into the cluster we need a token. The token embeds information with which we can enable mutual trust between the worker and controller(s) and allow the node to join the cluster as worker.
+A token is needed to be able to join workers into the cluster. The token embeds information that enables mutual trust between the worker and controller(s) and allows the node to join the cluster as a worker.
 
-To get a token run the following on one of the existing controller nodes:
+To get a token, run the following command on one of the existing controller nodes:
 ```sh
-k0s token create --role=worker
+$ k0s token create --role=worker
 ```
 
-This will output a long [token](#tokens) string, which we will then use to add a worker to the cluster. For enhanced security, we can also set an expiration time for the token by using:
+This will output a long [token](#tokens) string, which you will use to add a worker to the cluster. For enhanced security, it's possible to set an expiration time for the token by using:
 ```sh
 $ k0s token create --role=worker --expiry=100h > token-file
 ```
 
-## Adding Workers to a Cluster
+#### 4. Add workers to the cluster
 
-To join the worker we need to run k0s in worker mode with the token from the previous step:
+To join the worker, run k0s in worker mode with the token from the previous step:
 
 ```sh
 $ k0s install worker --token-file /path/to/token/file
 ```
+```sh
+$ systemctl start k0sworker
+```
 
-That's it, really.
-
-## Tokens
+##### About tokens
 
 The tokens are actually base64 encoded [kubeconfigs](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). Why:
 
-- well defined structure
-- can be used directly as bootstrap auth configs for kubelet
-- embeds CA info for mutual trust
+- Well-defined structure
+- Can be used directly as bootstrap auth configs for kubelet
+- Embeds CA info for mutual trust
 
-The actual bearer token embedded in the kubeconfig is a [bootstrap token](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/). For controller join token and for worker join token we use different usage attributes so we can make sure we can validate the token role on the controller side.
+The actual bearer token embedded in the kubeconfig is a [bootstrap token](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/). Controller and worker join tokens use different usage attributes, so the token role can be validated on the controller side.
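+
+You can verify this yourself (a sketch; it assumes the token was saved to `token-file` as above). Depending on the k0s version the payload may also be gzip-compressed, in which case pipe the output through `gunzip`:
+```sh
+# Decode the join token to reveal the beginning of the embedded kubeconfig:
+$ base64 -d < token-file | head
+```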
 
-## Adding a Controller Node
+#### 5. Add controllers to the cluster
 
-To add new controller nodes to the cluster, you must be using either etcd or an external data store (MySQL or Postgres) via kine. Please pay extra attention to the [HA Configuration](configuration.md#configuring-an-ha-control-plane) section in the configuration documentation, and make sure this configuration is identical for all controller nodes.
+To add new controller nodes to the cluster, you must be using either etcd or an external data store (MySQL or Postgres) via kine. Please pay extra attention to the [high availability configuration](high-availability.md), and make sure this configuration is identical for all controller nodes.
 
-To create a join token for the new controller, run the following on an existing controller node:
+To create a join token for the new controller, run the following on an existing controller:
 ```sh
 $ k0s token create --role=controller --expiry=1h > token-file
 ```
 
 On the new controller, run:
 ```sh
 $ sudo k0s install controller --token-file /path/to/token/file
 ```
+```sh
+$ systemctl start k0scontroller
+```
 
-## Adding a Cluster User
-
-To add a user to cluster, use the [kubeconfig create](cli/k0s_kubeconfig_create.md) command.
-This will output a kubeconfig for the user, which can be used for authentication.
-
-On the controller, run the following to generate a kubeconfig for a user:
-
-```sh
-$ k0s kubeconfig create [username]
-```
-
-### Enabling Access to Cluster Resources
-To allow the user access to the cluster, the user needs to be created with the `system:masters` group:
-```sh
-$ k0s kubeconfig create --groups "system:masters" testUser > k0s.config
-```
-
-Create a `roleBinding` to grant the user access to the resources:
-```sh
-$ k0s kubectl create clusterrolebinding --kubeconfig k0s.config testUser-admin-binding --clusterrole=admin --user=testUser
-```
-
-## Service and Log Setup
-[k0s install](cli/k0s_install.md) sub-command was created as a helper command to allow users to easily install k0s as a service.
-For more information, read [here](install.md).
-
-## Configuring an HA Control Plane
-
-The following pre-requisites are required in order to configure an HA control plane:
-
-### Requirements
-##### Load Balancer
-A load balancer with a single external address should be configured as the IP gateway for the controllers.
-The load balancer should allow traffic to each controller on the following ports:
-
-- 6443
-- 8132
-- 8133
-- 9443
-
-##### Cluster configuration
-On each controller node, a k0s.yaml configuration file should be configured.
-The following options need to match on each node, otherwise the control plane components will end up in very unknown states:
-
-- `network`
-- `storage`: Needless to say, one cannot create a clustered controlplane with each node only storing data locally on SQLite.
-- `externalAddress`
-
-[Full configuration file refrence](configuration.md)
-
-
-## Enabling Shell Completion
-The k0s completion script for Bash, zsh, fish and powershell can be generated with the command `k0s completion < shell >`. Sourcing the completion script in your shell enables k0s autocompletion.
-### Bash
-```sh
-echo 'source <(k0s completion bash)' >>~/.bashrc
-```
-
-```sh
-# To load completions for each session, execute once:
-$ k0s completion bash > /etc/bash_completion.d/k0s
-```
+#### 6. Check service and k0s status
 
+You can check the service status and logs like this:
+```sh
+$ sudo systemctl status k0scontroller
+    Loaded: loaded (/etc/systemd/system/k0scontroller.service; enabled; vendor preset: enabled)
+    Active: active (running) since Fri 2021-02-26 08:37:23 UTC; 1min 25s ago
+      Docs: https://docs.k0sproject.io
+  Main PID: 1408647 (k0s)
+     Tasks: 96
+    Memory: 1.2G
+    CGroup: /system.slice/k0scontroller.service
+ ....
+```
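+
+The full service logs go to the system journal; a quick way to follow them on a systemd host (a sketch):
+```sh
+# Stream the k0s controller logs:
+$ sudo journalctl -u k0scontroller -f
+```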
 
+To get general information about your k0s instance:
+```sh
+$ sudo k0s status
+Version: v0.11.0
+Process ID: 436
+Parent Process ID: 1
+Role: controller
+Init System: linux-systemd
+```
 
-### Zsh
-If shell completion is not already enabled in your environment you will need
-to enable it. You can execute the following once:
-```sh
-$ echo "autoload -U compinit; compinit" >> ~/.zshrc
-```
-
-```sh
-# To load completions for each session, execute once:
-$ k0s completion zsh > "${fpath[1]}/_k0s"
-```
-You will need to start a new shell for this setup to take effect.
+#### 7. Access your cluster
 
-### Fish
-```sh
-$ k0s completion fish | source
-```
-```sh
-# To load completions for each session, execute once:
-$ k0s completion fish > ~/.config/fish/completions/k0s.fish
-```
+The Kubernetes command-line tool `kubectl` is included in the k0s binary. You can use it, for example, to deploy your application or to check your node status like this:
+```sh
+$ sudo k0s kubectl get nodes
+NAME   STATUS   ROLES    AGE    VERSION
+k0s    Ready    <none>   4m6s   v1.20.4-k0s1
+```
+
+You can also access your cluster easily with [Lens](https://k8slens.dev/). Just copy the kubeconfig
+```sh
+sudo cat /var/lib/k0s/pki/admin.conf
+```
+and paste it into Lens. Note that in the kubeconfig you need to add your controller's host IP address to the server field (replacing localhost) in order to access the cluster from an external network.
+
+### Next Steps
+
+- [Automated Cluster Setup](k0sctl-install.md) for deploying and upgrading multi-node clusters with k0sctl
+- [Control plane configuration options](configuration.md), e.g. for networking and datastore configuration
+- [Worker node configuration options](worker-node-config.md), e.g. for node labels and kubelet arguments
+- [Support for cloud providers](cloud-providers.md), e.g. for load balancer or storage configuration
+- [Installing the Traefik Ingress Controller](examples/traefik-ingress.md), a tutorial for ingress deployment
diff --git a/docs/k0sctl-install.md b/docs/k0sctl-install.md
index 0e05edbbf7be..b95e8abee6e0 100644
--- a/docs/k0sctl-install.md
+++ b/docs/k0sctl-install.md
@@ -1,10 +1,23 @@
+# Automated Cluster Setup Using k0sctl
 
-# Deploying a k0s cluster using k0sctl
+This tutorial is based on the k0sctl tool and is targeted at creating a multi-node cluster on remote hosts. It describes an install method that is automated and easily repeatable, which makes it recommended for production clusters. The automatic upgrade also requires this install method, and the upgrade process is described in this tutorial as well.
 
-k0sctl is a command-line tool for bootstrapping and management of k0s clusters. Installation instructions can be found in the [k0sctl github repository](https://github.com/k0sproject/k0sctl#installation).
+k0sctl is a command-line tool for bootstrapping and managing k0s clusters. k0sctl connects to the provided hosts using SSH and gathers information about them. Based on its findings, it proceeds to configure the hosts, deploy k0s and connect the k0s nodes together to form a cluster.
+
+![k0sctl deployment](img/k0sctl_deployment.png)
+
+### Prerequisites
+
+k0sctl can be executed on Linux, MacOS and Windows. See more details in the [k0sctl github repository](https://github.com/k0sproject/k0sctl). For hosts running k0s, see the [System Requirements](system-requirements.md).
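+
+k0sctl needs SSH access to each host. A typical preparation step (a sketch; the address and user below match the example configuration later in this tutorial) is to authorize your SSH key on every host:
+```sh
+# Authorize your public key on each host so k0sctl can log in without a password:
+ssh-copy-id root@10.0.0.1
+```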
+
+### Installation steps
+
+#### 1. Install the k0sctl tool
+
+k0sctl is a single binary, and the download and installation instructions can be found in the [k0sctl github repository](https://github.com/k0sproject/k0sctl#installation).
+
+#### 2. Configure the cluster
 
-k0sctl will connect to provided host using ssh and gather information about the host. Based on the finding it will proceed to configure the host in question and install k0s binary.
-## Using k0sctl
 First create a k0sctl configuration file:
 ```sh
 $ k0sctl init > k0sctl.yaml
 ```
 
 ```yaml
 apiVersion: k0sctl.k0sproject.io/v1beta1
 kind: Cluster
 metadata:
   name: k0s-cluster
 spec:
   hosts:
   - role: controller
     ssh:
       address: 10.0.0.1
       user: root
       keyPath: ~/.ssh/id_rsa
   - role: worker
     ssh:
       address: 10.0.0.2
       user: root
       keyPath: ~/.ssh/id_rsa
 ```
-k0sctl configuration specifications can be found in [k0sctl documentation](https://github.com/k0sproject/k0sctl#configuration-file-spec-fields)
+
+As a mandatory step, each host must be given a valid IP address (which is reachable by k0sctl) and the SSH connection details. The k0sctl configuration specifications can be found in the [k0sctl documentation](https://github.com/k0sproject/k0sctl#configuration-file-spec-fields).
+
+#### 3. Deploy the cluster
 
-Next step is to run `k0sctl apply` to perform the cluster deployment:
+The next step is to run `k0sctl apply` to perform the cluster deployment:
 ```sh
 $ k0sctl apply --config path/to/k0sctl.yaml
 
@@ -84,9 +100,15 @@ INFO k0sctl kubeconfig
 
 And -- presto! Your k0s cluster is up and running.
 
-Get kubeconfig:
+#### 4. Access the cluster
+
+To access your k0s cluster, you first need the kubeconfig; k0sctl can fetch it for you:
 ```sh
 $ k0sctl kubeconfig > kubeconfig
+```
+
+Then you can access your cluster, for example, by using kubectl or [Lens](https://k8slens.dev/):
+```sh
 $ kubectl get pods --kubeconfig kubeconfig -A
 NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
 kube-system   calico-kube-controllers-5f6546844f-w8x27   1/1     Running   0          3m50s
 
 INFO[0027] Tip: To access the cluster you can now fetch the admin kubeconfig using:
 INFO[0027]      k0sctl kubeconfig
 ```
 
 ### Known limitations
 
 * k0sctl will not perform any discovery of hosts; it only operates on the hosts listed in the provided configuration
 * k0sctl can currently only add more nodes to the cluster but cannot remove existing ones
 
+### Next Steps
+
+- [Manual Install](k0s-multi-node.md) for advanced users who want to deploy multi-node clusters manually
+- [Control plane configuration options](configuration.md), e.g. for networking and datastore configuration
+- [Worker node configuration options](worker-node-config.md), e.g. for node labels and kubelet arguments
+- [Support for cloud providers](cloud-providers.md), e.g. for load balancer or storage configuration
+- [Installing the Traefik Ingress Controller](examples/traefik-ingress.md), a tutorial for ingress deployment
diff --git a/docs/shell-completion.md b/docs/shell-completion.md
new file mode 100644
index 000000000000..7fd78797df07
--- /dev/null
+++ b/docs/shell-completion.md
@@ -0,0 +1,38 @@
+# Enabling Shell Completion
+
+The k0s completion script for Bash, zsh, fish and powershell can be generated with the command
+`k0s completion <shell>`.
+
+Sourcing the completion script in your shell enables k0s autocompletion.
+
+### Bash
+
+```sh
+echo 'source <(k0s completion bash)' >>~/.bashrc
+```
+
+```sh
+# To load completions for each session, execute once:
+$ k0s completion bash > /etc/bash_completion.d/k0s
+```
+### Zsh
+
+If shell completion is not already enabled in your environment, you will need to enable it. You can execute the following once:
+```sh
+$ echo "autoload -U compinit; compinit" >> ~/.zshrc
+```
+```sh
+# To load completions for each session, execute once:
+$ k0s completion zsh > "${fpath[1]}/_k0s"
+```
+You will need to start a new shell for this setup to take effect.
+
+### Fish
+
+```sh
+$ k0s completion fish | source
+```
+```sh
+# To load completions for each session, execute once:
+$ k0s completion fish > ~/.config/fish/completions/k0s.fish
+```
diff --git a/docs/user-management.md b/docs/user-management.md
new file mode 100644
index 000000000000..980d72493a17
--- /dev/null
+++ b/docs/user-management.md
@@ -0,0 +1,22 @@
+# User Management
+
+### Adding a Cluster User
+
+To add a user to the cluster, use the [kubeconfig create](cli/k0s_kubeconfig_create.md) command. This will output a kubeconfig for the user, which can be used for authentication.
+
+On the controller, run the following to generate a kubeconfig for a user:
+
+```sh
+$ k0s kubeconfig create [username]
+```
+
+### Enabling Access to Cluster Resources
+To allow the user access to the cluster, the user needs to be created with the `system:masters` group:
+```sh
+$ k0s kubeconfig create --groups "system:masters" testUser > k0s.config
+```
+
+Create a `roleBinding` to grant the user access to the resources:
+```sh
+$ k0s kubectl create clusterrolebinding --kubeconfig k0s.config testUser-admin-binding --clusterrole=admin --user=testUser
+```
\ No newline at end of file
diff --git a/mkdocs.yml b/mkdocs.yml
index ec5a7f76b703..93cf95190359 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -10,25 +10,28 @@ copyright: Copyright © 2021 Mirantis Inc.

Date: Tue, 9 Mar 2021 11:42:43 +0200
Subject: [PATCH 2/2] Install tutorial name alignment

Install document name changes and minor content changes.

Signed-off-by: mviitanen
---
 docs/install.md        | 6 ++++--
 docs/k0s-in-docker.md  | 5 ++---
 docs/k0s-multi-node.md | 4 +++-
 docs/k0sctl-install.md | 3 +--
 mkdocs.yml             | 6 +++---
 5 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/docs/install.md b/docs/install.md
index 34c390ae5835..f5e35272f9f3 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -1,9 +1,11 @@
-# Quick Start Guide
+# Getting Started
 
 In this tutorial you'll create a full Kubernetes cluster with just one node, including both the controller and the worker. This is well suited for environments where high availability and multiple nodes are not needed. It is the easiest install method for experimenting with k0s.
 
 ### Prerequisites
 
+This tutorial has been written for Debian/Ubuntu, but it can be used with any Linux distribution running one of the supported init systems: Systemd or OpenRC.
+
 Before proceeding, make sure to review the [System Requirements](system-requirements.md).
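+
+If you are unsure which init system your host runs, you can check what PID 1 is (a quick sketch; prints e.g. "systemd" on systemd hosts):
+```sh
+# Show the name of the init process:
+$ ps -p 1 -o comm=
+```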
 
 ### Installation steps
 
@@ -88,7 +90,7 @@ $ sudo k0s reset
 ```
 
 ### Next Steps
 
-- [Automated Cluster Setup](k0sctl-install.md) for deploying and upgrading multi-node clusters with k0sctl
+- [Installing with k0sctl](k0sctl-install.md) for deploying and upgrading multi-node clusters with one command
 - [Manual Install](k0s-multi-node.md) for advanced users who want to deploy multi-node clusters manually
 - [Control plane configuration options](configuration.md), e.g. for networking and datastore configuration
 - [Worker node configuration options](worker-node-config.md), e.g. for node labels and kubelet arguments
 - [Support for cloud providers](cloud-providers.md), e.g. for load balancer or storage configuration
 - [Installing the Traefik Ingress Controller](examples/traefik-ingress.md), a tutorial for ingress deployment
diff --git a/docs/k0s-in-docker.md b/docs/k0s-in-docker.md
index 0999bf8f9251..83cbddd0d2d9 100644
--- a/docs/k0s-in-docker.md
+++ b/docs/k0s-in-docker.md
@@ -4,7 +4,7 @@ In this tutorial you'll create a k0s cluster on top of Docker. By default, both
 
 ### Prerequisites
 
-Docker environment on top of Mac, Windows or Linux. [Get Docker](https://docs.docker.com/get-docker/).
+A Docker environment on Mac, Windows or Linux. [Get Docker](https://docs.docker.com/get-docker/).
 
 ### Container images
 
@@ -84,8 +84,7 @@ Currently, we cannot run k0s nodes if the containers are configured to use custo
 
 ### Next Steps
 
-- [Automated Cluster Setup](k0sctl-install.md) for deploying and upgrading multi-node clusters with k0sctl
-- [Manual Install](k0s-multi-node.md) for advanced users who want to deploy multi-node clusters manually
+- [Installing with k0sctl](k0sctl-install.md) for deploying and upgrading multi-node clusters with one command
 - [Control plane configuration options](configuration.md), e.g. for networking and datastore configuration
 - [Worker node configuration options](worker-node-config.md), e.g. for node labels and kubelet arguments
 - [Support for cloud providers](cloud-providers.md), e.g. for load balancer or storage configuration
 - [Installing the Traefik Ingress Controller](examples/traefik-ingress.md), a tutorial for ingress deployment
diff --git a/docs/k0s-multi-node.md b/docs/k0s-multi-node.md
index 5a63f74ced93..7b56ef4686e5 100644
--- a/docs/k0s-multi-node.md
+++ b/docs/k0s-multi-node.md
@@ -4,6 +4,8 @@ In this tutorial you'll create a multi-node cluster that is locally managed on
 
 ### Prerequisites
 
+This tutorial has been written for Debian/Ubuntu, but it can be used with any Linux distribution running one of the supported init systems: Systemd or OpenRC.
+
 Before proceeding, make sure to review the [System Requirements](system-requirements.md).
 
 To speed up the usage of the `k0s` command, you may want to enable [shell completion](shell-completion.md).
@@ -140,7 +142,7 @@ and paste it into Lens. Note that in the kubeconfig you need to add your control
 
 ### Next Steps
 
-- [Automated Cluster Setup](k0sctl-install.md) for deploying and upgrading multi-node clusters with k0sctl
+- [Installing with k0sctl](k0sctl-install.md) for deploying and upgrading multi-node clusters with one command
 - [Control plane configuration options](configuration.md), e.g. for networking and datastore configuration
 - [Worker node configuration options](worker-node-config.md), e.g. for node labels and kubelet arguments
 - [Support for cloud providers](cloud-providers.md), e.g. for load balancer or storage configuration
 - [Installing the Traefik Ingress Controller](examples/traefik-ingress.md), a tutorial for ingress deployment
diff --git a/docs/k0sctl-install.md b/docs/k0sctl-install.md
index b95e8abee6e0..186da9169f1e 100644
--- a/docs/k0sctl-install.md
+++ b/docs/k0sctl-install.md
@@ -1,4 +1,4 @@
-# Automated Cluster Setup Using k0sctl
+# Installing with k0sctl
 
 This tutorial is based on the k0sctl tool and is targeted at creating a multi-node cluster on remote hosts. It describes an install method that is automated and easily repeatable, which makes it recommended for production clusters. The automatic upgrade also requires this install method, and the upgrade process is described in this tutorial as well.
 
@@ -170,7 +170,6 @@ INFO[0027] k0sctl kubeconfig
 
 ### Next Steps
 
-- [Manual Install](k0s-multi-node.md) for advanced users who want to deploy multi-node clusters manually
 - [Control plane configuration options](configuration.md), e.g. for networking and datastore configuration
 - [Worker node configuration options](worker-node-config.md), e.g. for node labels and kubelet arguments
 - [Support for cloud providers](cloud-providers.md), e.g. for load balancer or storage configuration
 - [Installing the Traefik Ingress Controller](examples/traefik-ingress.md), a tutorial for ingress deployment
diff --git a/mkdocs.yml b/mkdocs.yml
index 93cf95190359..7ac8c3b0da57 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -11,10 +11,10 @@ edit_uri: ""
 nav:
   - Overview: README.md
   - Install:
-    - Quick Start Guide: install.md
-    - Automated Cluster Setup: k0sctl-install.md
-    - Manual Install: k0s-multi-node.md
+    - Getting Started: install.md
+    - Installing with k0sctl: k0sctl-install.md
     - Alternative Install Methods:
+      - Manual Install: k0s-multi-node.md
       - Docker: k0s-in-docker.md
       - Windows (experimental): experimental-windows.md
   - Architecture: