Releases: pingcap/tiup

v1.2.0

29 Sep 08:47
1a4fbe7

Fixes

  • Fix the issue that tiup update --self deletes tiup's own binary file (#816, @lucklove)
  • Fix per-host custom ports for drainer not being handled correctly when importing (#806, @AstroProfundis)
  • Fix the issue that help messages are inconsistent (#758, @9547)
  • Fix the issue that DM does not apply config files correctly (#810, @lucklove)
  • Fix the issue that playground displays the wrong TiDB count in its error message (#821, @SwanSpouse)

Improvements

  • Automatically check whether TiKV's labels are set (#800, @lucklove); see the example after this list
  • Download components in stream mode to avoid memory explosion (#755, @9547)
  • Save and display absolute paths for the deploy directory, data directory, and log directory to avoid confusion (#822, @lucklove)
  • Redirect DM stdout to log files (#815, @csuzhangxc)
  • Skip downloading the nightly package when it already exists (#793, @lucklove)
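
For the TiKV label check above: labels are normally declared in the cluster topology file. Below is a minimal sketch of such a snippet, assuming the standard tiup-cluster topology keys; the host address, label names, and label values are made-up placeholders.

    # example topology snippet to be merged into your real topology file
    # (placeholder host and label values)
    cat > topology.yaml <<'EOF'
    server_configs:
      pd:
        replication.location-labels: ["zone", "host"]
    tikv_servers:
      - host: 172.16.13.11
        config:
          server.labels: { zone: "z1", host: "h1" }
    EOF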

v1.1.2

11 Sep 12:35

Fixes

  • Fix the issue that the TiKV store leader count is not correct (#762)
  • Fix the issue that TiFlash's data is not cleaned up (#768)
  • Fix the issue that tiup cluster deploy --help displays the wrong help message (#758)
  • Fix the issue that tiup-playground can't display and scale the cluster (#749)

v1.1.1

01 Sep 12:41

Fixes

  • Remove the username root in the sudo command (#731)
  • Transfer the default alertmanager.yml if the local config file is not specified (#735)
  • Only remove the corresponding config files in InitConfig for the monitor service, in case it is a shared directory (#736)

v1.1.0

28 Aug 09:36
3988f6f

New Features

  • [experimental] Support specifying customized configuration files for monitor components (#712, @lucklove)
  • Support specifying a user group or skipping user creation in the deploy and scale-out stages (#678, @lucklove); see the example after this list
  • [experimental] Support renaming a cluster with the command tiup cluster rename <old-name> <new-name> (#671, @lucklove)

    Grafana stores some data related to the cluster name in its grafana.db. The rename action will NOT delete that data, so some useless panels may need to be deleted manually.

  • [experimental] Introduce the tiup cluster clean command (#644, @lucklove):
    • Clean up all data in the specified cluster: tiup cluster clean ${cluster-name} --data
    • Clean up all logs in the specified cluster: tiup cluster clean ${cluster-name} --log
    • Clean up all logs and data in the specified cluster: tiup cluster clean ${cluster-name} --all
    • Clean up all logs and data in the specified cluster, except the prometheus service: tiup cluster clean ${cluster-name} --all --ignore-role prometheus
    • Clean up all logs and data in the specified cluster, except the node 172.16.13.11:9000: tiup cluster clean ${cluster-name} --all --ignore-node 172.16.13.11:9000
    • Clean up all logs and data in the specified cluster, except the host 172.16.13.12: tiup cluster clean ${cluster-name} --all --ignore-node 172.16.13.12
  • Support skipping store eviction when there is only one TiKV node (#662, @lucklove)
  • Support importing clusters with binlog enabled (#652, @AstroProfundis)
  • Support the yml source format with tiup-dm (#655, @july2993)
  • Support detecting port conflicts of monitoring agents between different clusters (#623, @AstroProfundis)
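
A minimal sketch of the user group feature above, assuming the standard global section of a tiup-cluster topology file; the group name, cluster name, and version are placeholders, and the --skip-create-user flag should be verified against tiup cluster deploy --help on your version.

    # deploy under an existing OS group instead of one named after the user
    cat > topology.yaml <<'EOF'
    global:
      user: "tidb"
      group: "pingcap"   # placeholder group name
    # ... server sections omitted ...
    EOF
    tiup cluster deploy example-cluster v4.0.0 topology.yaml
    # or, if the OS user already exists, skip creating it entirely
    tiup cluster deploy example-cluster v4.0.0 topology.yaml --skip-create-user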

Fixes

  • Set the correct deploy_dir of monitoring agents when importing Ansible-deployed clusters (#704, @AstroProfundis)
  • Fix the issue that tiup update --self may make root.json invalid with an offline mirror (#659, @lucklove)

Improvements

  • Add advertise-status-addr for TiFlash to support host names (#676, @birdstorm)

Release v1.0.9

03 Aug 11:21
41fbacf

tiup

  • Support cloning with yanked versions (#602)
  • Support yanking a single version on the client side (#602)
  • Support bash and zsh completion (#606)
  • Handle yanked versions when updating components (#635)

tiup-cluster

  • Validate topology changes after edit-config (#609)
  • Allow continuing to edit when the new topology has errors (#624)
  • Fix wrongly set data_dir of TiFlash when importing from Ansible (#612)
  • Support the native SSH client (#615); see the example after this list
  • Support refreshing configuration only when reloading (#625)
  • Apply config files on scaled PD servers (#627)
  • Refresh monitor configs on reload (#630)
  • Support POSIX-style arguments for the user flag (#631)
  • Fix PD config incompatibility when retrieving the dashboard address (#638)
  • Integrate TiSpark (#531, #621)
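
A minimal sketch of the native SSH option above; the --native-ssh flag name is taken from tiup-cluster's options and should be verified with --help on your version, and the cluster name is a placeholder.

    # use the system's ssh/scp executables instead of the built-in SSH library
    tiup cluster display example-cluster --native-ssh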

Release v1.0.8

13 Jul 10:12
4276089

Risk Events

A critical bug introduced in v1.0.0 has been fixed in v1.0.8.
If a user scales in some TiKV nodes with the command tiup cluster scale-in, TiUP may delete TiKV nodes by mistake, causing data loss in the TiDB cluster.
The root cause:

  1. TiUP wrongly treats these TiKV nodes' state as tombstone, and then reports an error that confuses the user.
  2. The user then executes the command tiup cluster display to confirm the real state of the cluster, but the display command also shows these TiKV nodes as being in tombstone state.
  3. What's worse, the display command destroys tombstone nodes automatically, with no user confirmation required, so these TiKV nodes are destroyed by mistake.

To prevent this, this release introduces a safer, manual way to clean up tombstone nodes; see the sketch below.
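
A minimal sketch of the safer flow, assuming the prune subcommand that tiup-cluster uses for tombstone cleanup (the exact subcommand in this release may differ; check tiup cluster --help), with a placeholder cluster name.

    # display now only reports tombstone nodes instead of destroying them
    tiup cluster display example-cluster
    # tombstone nodes are removed only on an explicit, separate request
    tiup cluster prune example-cluster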

  • Fix the bug that ctl's working directory is different from TiUP's (#589)
  • Introduce a more general way to configure the profile (#578)
  • cluster: properly pass --wait-timeout to systemd operations (#585)
  • Always match the newest store when matching by address (#579)
  • Fix init config with check config (#583)
  • Bugfix: patch cannot overwrite twice (#558)
  • Request the remote mirror when the local manifest has expired (#560)
  • Encapsulate operations on the meta file (#567)
  • Playground: fix panic when TiFlash fails to start (#543)
  • Cluster: show a message when a fix is impossible (#550)
  • Fix scale-in of TiFlash in playground (#541)
  • Fix the config of Grafana (#535)

tiup v0.0.1

09 Mar 08:45
Refine the download error message

Signed-off-by: Lonng <[email protected]>